Feb 13 15:31:10.914485 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:31:10.914513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:31:10.914528 kernel: BIOS-provided physical RAM map:
Feb 13 15:31:10.914536 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:31:10.914545 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:31:10.914553 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:31:10.914563 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:31:10.914572 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:31:10.914581 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:31:10.914589 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:31:10.914601 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:31:10.914609 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:31:10.914618 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:31:10.914627 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:31:10.914638 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:31:10.914647 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:31:10.914659 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:31:10.914669 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:31:10.914678 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:31:10.914687 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:31:10.914696 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:31:10.914706 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:31:10.914715 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:31:10.914724 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:31:10.914750 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:31:10.914759 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:31:10.914769 kernel: NX (Execute Disable) protection: active
Feb 13 15:31:10.914782 kernel: APIC: Static calls initialized
Feb 13 15:31:10.914791 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:31:10.914801 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:31:10.914810 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:31:10.914819 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:31:10.914828 kernel: extended physical RAM map:
Feb 13 15:31:10.914837 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:31:10.914847 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:31:10.914856 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:31:10.914866 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:31:10.914875 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:31:10.914887 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:31:10.914897 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:31:10.914911 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:31:10.914920 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:31:10.914930 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:31:10.914940 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:31:10.914950 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:31:10.914963 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:31:10.914973 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:31:10.914982 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:31:10.914992 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:31:10.915002 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:31:10.915012 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:31:10.915022 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:31:10.915032 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:31:10.915042 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:31:10.915055 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:31:10.915065 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:31:10.915075 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:31:10.915084 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:31:10.915094 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:31:10.915104 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:31:10.915114 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:31:10.915124 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:31:10.915134 kernel: random: crng init done
Feb 13 15:31:10.915144 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:31:10.915154 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:31:10.915176 kernel: secureboot: Secure boot disabled
Feb 13 15:31:10.915186 kernel: SMBIOS 2.8 present.
Feb 13 15:31:10.915196 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:31:10.915206 kernel: Hypervisor detected: KVM
Feb 13 15:31:10.915216 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:31:10.915225 kernel: kvm-clock: using sched offset of 2644445633 cycles
Feb 13 15:31:10.915236 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:31:10.915247 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:31:10.915257 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:31:10.915268 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:31:10.915278 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:31:10.915291 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:31:10.915301 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:31:10.915311 kernel: Using GB pages for direct mapping
Feb 13 15:31:10.915321 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:31:10.915331 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:31:10.915342 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:31:10.915353 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915363 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915373 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:31:10.915386 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915396 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915406 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915417 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:31:10.915427 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:31:10.915437 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:31:10.915447 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:31:10.915457 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:31:10.915470 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:31:10.915479 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:31:10.915492 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:31:10.915507 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:31:10.915524 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:31:10.915539 kernel: No NUMA configuration found
Feb 13 15:31:10.915557 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:31:10.915571 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:31:10.915586 kernel: Zone ranges:
Feb 13 15:31:10.915602 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:31:10.915625 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:31:10.915637 kernel: Normal empty
Feb 13 15:31:10.915649 kernel: Movable zone start for each node
Feb 13 15:31:10.915664 kernel: Early memory node ranges
Feb 13 15:31:10.915679 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:31:10.915691 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:31:10.915700 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:31:10.915710 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:31:10.915720 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:31:10.915743 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:31:10.915753 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:31:10.915763 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:31:10.915773 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:31:10.915782 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:31:10.915792 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:31:10.915811 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:31:10.915824 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:31:10.915833 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:31:10.915843 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:31:10.915853 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:31:10.915863 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:31:10.915876 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:31:10.915885 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:31:10.915895 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:31:10.915905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:31:10.915915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:31:10.915927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:31:10.915938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:31:10.915947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:31:10.915957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:31:10.915967 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:31:10.915986 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:31:10.915996 kernel: TSC deadline timer available
Feb 13 15:31:10.916005 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:31:10.916014 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:31:10.916023 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:31:10.916036 kernel: kvm-guest: setup PV sched yield
Feb 13 15:31:10.916045 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:31:10.916054 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:31:10.916064 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:31:10.916073 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:31:10.916083 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:31:10.916092 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:31:10.916101 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:31:10.916110 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:31:10.916122 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:31:10.916132 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:31:10.916142 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:31:10.916151 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:31:10.916169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:31:10.916179 kernel: Fallback order for Node 0: 0
Feb 13 15:31:10.916188 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:31:10.916198 kernel: Policy zone: DMA32
Feb 13 15:31:10.916210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:31:10.916219 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 15:31:10.916228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:31:10.916238 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:31:10.916247 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:31:10.916256 kernel: Dynamic Preempt: voluntary
Feb 13 15:31:10.916265 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:31:10.916275 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:31:10.916284 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:31:10.916297 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:31:10.916306 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:31:10.916315 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:31:10.916324 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:31:10.916333 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:31:10.916343 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:31:10.916352 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:31:10.916361 kernel: Console: colour dummy device 80x25
Feb 13 15:31:10.916370 kernel: printk: console [ttyS0] enabled
Feb 13 15:31:10.916382 kernel: ACPI: Core revision 20230628
Feb 13 15:31:10.916391 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:31:10.916400 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:31:10.916409 kernel: x2apic enabled
Feb 13 15:31:10.916418 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:31:10.916428 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:31:10.916437 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:31:10.916446 kernel: kvm-guest: setup PV IPIs
Feb 13 15:31:10.916455 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:31:10.916467 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:31:10.916476 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:31:10.916485 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:31:10.916494 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:31:10.916503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:31:10.916512 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:31:10.916521 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:31:10.916531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:31:10.916540 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:31:10.916551 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:31:10.916561 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:31:10.916570 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:31:10.916579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:31:10.916588 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:31:10.916598 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:31:10.916608 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:31:10.916617 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:31:10.916626 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:31:10.916638 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:31:10.916647 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:31:10.916656 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:31:10.916665 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:31:10.916674 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:31:10.916683 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:31:10.916692 kernel: landlock: Up and running.
Feb 13 15:31:10.916701 kernel: SELinux: Initializing.
Feb 13 15:31:10.916711 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:31:10.916723 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:31:10.916751 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:31:10.916781 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:31:10.916790 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:31:10.916799 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:31:10.916809 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:31:10.916818 kernel: ... version: 0
Feb 13 15:31:10.916827 kernel: ... bit width: 48
Feb 13 15:31:10.916839 kernel: ... generic registers: 6
Feb 13 15:31:10.916848 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:31:10.916857 kernel: ... max period: 00007fffffffffff
Feb 13 15:31:10.916866 kernel: ... fixed-purpose events: 0
Feb 13 15:31:10.916875 kernel: ... event mask: 000000000000003f
Feb 13 15:31:10.916884 kernel: signal: max sigframe size: 1776
Feb 13 15:31:10.916894 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:31:10.916905 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:31:10.916922 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:31:10.916933 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:31:10.916946 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:31:10.916956 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:31:10.916966 kernel: smpboot: Max logical packages: 1
Feb 13 15:31:10.916976 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:31:10.916986 kernel: devtmpfs: initialized
Feb 13 15:31:10.916996 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:31:10.917006 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:31:10.917016 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:31:10.917026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:31:10.917039 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:31:10.917049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:31:10.917059 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:31:10.917069 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:31:10.917080 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:31:10.917090 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:31:10.917100 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:31:10.917111 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:31:10.917124 kernel: audit: type=2000 audit(1739460671.525:1): state=initialized audit_enabled=0 res=1
Feb 13 15:31:10.917134 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:31:10.917144 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:31:10.917155 kernel: cpuidle: using governor menu
Feb 13 15:31:10.917178 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:31:10.917189 kernel: dca service started, version 1.12.1
Feb 13 15:31:10.917201 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:31:10.917212 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:31:10.917223 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:31:10.917237 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:31:10.917248 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:31:10.917259 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:31:10.917270 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:31:10.917281 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:31:10.917291 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:31:10.917302 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:31:10.917313 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:31:10.917324 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:31:10.917338 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:31:10.917349 kernel: ACPI: Interpreter enabled
Feb 13 15:31:10.917360 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:31:10.917371 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:31:10.917382 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:31:10.917393 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:31:10.917404 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:31:10.917415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:31:10.917641 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:31:10.917827 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:31:10.917981 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:31:10.917995 kernel: PCI host bridge to bus 0000:00
Feb 13 15:31:10.918147 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:31:10.918301 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:31:10.918476 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:31:10.918701 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:31:10.918884 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:31:10.919029 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:31:10.919180 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:31:10.919358 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:31:10.919522 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:31:10.919699 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:31:10.919936 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:31:10.920096 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:31:10.920260 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:31:10.920416 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:31:10.920582 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:31:10.920751 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:31:10.920911 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:31:10.921072 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:31:10.921248 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:31:10.921403 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:31:10.921556 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:31:10.921708 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:31:10.921908 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:31:10.922071 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:31:10.922236 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:31:10.922390 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:31:10.922543 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:31:10.922713 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:31:10.922912 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:31:10.923072 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:31:10.923240 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:31:10.923389 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:31:10.923548 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:31:10.923698 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:31:10.923713 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:31:10.923725 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:31:10.923751 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:31:10.923769 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:31:10.923784 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:31:10.923793 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:31:10.923803 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:31:10.923812 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:31:10.923822 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:31:10.923831 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:31:10.923840 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:31:10.923850 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:31:10.923859 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:31:10.923871 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:31:10.923881 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:31:10.923890 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:31:10.923900 kernel: iommu: Default domain type: Translated
Feb 13 15:31:10.923909 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:31:10.923918 kernel: efivars: Registered efivars operations
Feb 13 15:31:10.923928 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:31:10.923937 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:31:10.923956 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:31:10.923970 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:31:10.923980 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:31:10.923991 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:31:10.924001 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:31:10.924012 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:31:10.924023 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:31:10.924033 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:31:10.924203 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:31:10.924398 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:31:10.924551 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:31:10.924566 kernel: vgaarb: loaded
Feb 13 15:31:10.924577 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:31:10.924588 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:31:10.924599 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:31:10.924610 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:31:10.924621 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:31:10.924633 kernel: pnp: PnP ACPI init
Feb 13 15:31:10.924820 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:31:10.924838 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:31:10.924850 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:31:10.924861 kernel: NET: Registered PF_INET protocol family
Feb 13 15:31:10.924894 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:31:10.924909 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:31:10.924920 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:31:10.924931 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:31:10.924946 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:31:10.924960 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:31:10.924971 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:31:10.924983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:31:10.924995 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:31:10.925006 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:31:10.925175 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:31:10.925334 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:31:10.925485 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:31:10.925624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:31:10.925885 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:31:10.926027 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:31:10.926177 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:31:10.926317 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:31:10.926333 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:31:10.926344 kernel: Initialise system trusted keyrings
Feb 13 15:31:10.926361 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:31:10.926373 kernel: Key type asymmetric registered
Feb 13 15:31:10.926384 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:31:10.926396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:31:10.926407 kernel: io scheduler mq-deadline registered
Feb 13 15:31:10.926419 kernel: io scheduler kyber registered
Feb 13 15:31:10.926431 kernel: io scheduler bfq registered
Feb 13 15:31:10.926442 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:31:10.926455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:31:10.926470 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:31:10.926485 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:31:10.926497 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:31:10.926508 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:31:10.926520 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:31:10.926532 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:31:10.926547 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:31:10.926702 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:31:10.926866 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:31:10.926882 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 15:31:10.927005 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:31:10 UTC (1739460670)
Feb 13 15:31:10.927139 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:31:10.927173 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:31:10.927185 kernel: efifb: probing for efifb
Feb 13 15:31:10.927202 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:31:10.927213 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:31:10.927225 kernel: efifb: scrolling: redraw
Feb 13 15:31:10.927237 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:31:10.927248 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:31:10.927260 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:31:10.927272 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:31:10.927283 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:31:10.927295 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:31:10.927309 kernel: Segment Routing with IPv6
Feb 13 15:31:10.927321 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:31:10.927333 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:31:10.927344 kernel: Key type dns_resolver registered
Feb 13 15:31:10.927356 kernel: IPI shorthand broadcast: enabled
Feb 13 15:31:10.927367 kernel: sched_clock: Marking stable (637004597, 150260184)->(803254649, -15989868)
Feb 13 15:31:10.927379 kernel: registered taskstats version 1
Feb 13 15:31:10.927390 kernel: Loading compiled-in X.509 certificates
Feb 13 15:31:10.927402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:31:10.927417 kernel: Key type .fscrypt registered
Feb 13 15:31:10.927428 kernel: Key type fscrypt-provisioning registered
Feb 13 15:31:10.927440 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:31:10.927455 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:31:10.927466 kernel: ima: No architecture policies found Feb 13 15:31:10.927478 kernel: clk: Disabling unused clocks Feb 13 15:31:10.927489 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:31:10.927501 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:31:10.927513 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:31:10.927527 kernel: Run /init as init process Feb 13 15:31:10.927538 kernel: with arguments: Feb 13 15:31:10.927550 kernel: /init Feb 13 15:31:10.927562 kernel: with environment: Feb 13 15:31:10.927573 kernel: HOME=/ Feb 13 15:31:10.927585 kernel: TERM=linux Feb 13 15:31:10.927596 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:31:10.927611 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:31:10.927629 systemd[1]: Detected virtualization kvm. Feb 13 15:31:10.927641 systemd[1]: Detected architecture x86-64. Feb 13 15:31:10.927654 systemd[1]: Running in initrd. Feb 13 15:31:10.927665 systemd[1]: No hostname configured, using default hostname. Feb 13 15:31:10.927676 systemd[1]: Hostname set to . Feb 13 15:31:10.927689 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:31:10.927701 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:31:10.927714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:31:10.927743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:31:10.927756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:31:10.927769 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:31:10.927782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:31:10.927794 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:31:10.927809 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:31:10.927825 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:31:10.927837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:31:10.927849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:31:10.927862 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:31:10.927874 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:31:10.927886 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:31:10.927899 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:31:10.927911 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:31:10.927923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:31:10.927938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:31:10.927950 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:31:10.927962 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:31:10.927975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:31:10.927987 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:31:10.927999 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:31:10.928011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:31:10.928024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:31:10.928036 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:31:10.928051 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:31:10.928064 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:31:10.928076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:31:10.928088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:31:10.928100 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:31:10.928113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:31:10.928125 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:31:10.928181 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 15:31:10.928214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:31:10.928227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:10.928240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:31:10.928253 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:31:10.928266 systemd-journald[194]: Journal started Feb 13 15:31:10.928292 systemd-journald[194]: Runtime Journal (/run/log/journal/eef3c74a60fb47c9b6fc352e71a8f0cc) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:31:10.916960 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:31:10.933243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 15:31:10.935860 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:31:10.943675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:31:10.944843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:31:10.952353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:31:10.952751 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:31:10.953916 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:31:10.958130 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:31:10.958807 kernel: Bridge firewalling registered Feb 13 15:31:10.959325 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:31:10.973203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:31:10.981677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:31:10.982034 dracut-cmdline[220]: dracut-dracut-053 Feb 13 15:31:10.985218 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:31:10.998708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:31:11.006882 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:31:11.042471 systemd-resolved[254]: Positive Trust Anchors: Feb 13 15:31:11.042485 systemd-resolved[254]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:31:11.042515 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:31:11.045152 systemd-resolved[254]: Defaulting to hostname 'linux'. Feb 13 15:31:11.046427 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:31:11.052105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:31:11.089761 kernel: SCSI subsystem initialized Feb 13 15:31:11.098751 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:31:11.109786 kernel: iscsi: registered transport (tcp) Feb 13 15:31:11.130906 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:31:11.130994 kernel: QLogic iSCSI HBA Driver Feb 13 15:31:11.186580 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:31:11.193979 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:31:11.224740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:31:11.224796 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:31:11.224808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:31:11.267771 kernel: raid6: avx2x4 gen() 29784 MB/s Feb 13 15:31:11.284754 kernel: raid6: avx2x2 gen() 31338 MB/s Feb 13 15:31:11.301833 kernel: raid6: avx2x1 gen() 25970 MB/s Feb 13 15:31:11.301867 kernel: raid6: using algorithm avx2x2 gen() 31338 MB/s Feb 13 15:31:11.319858 kernel: raid6: .... xor() 20009 MB/s, rmw enabled Feb 13 15:31:11.319886 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:31:11.339757 kernel: xor: automatically using best checksumming function avx Feb 13 15:31:11.493775 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:31:11.508693 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:31:11.526945 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:31:11.538322 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 15:31:11.542802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:31:11.543958 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:31:11.566592 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Feb 13 15:31:11.601212 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:31:11.606844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:31:11.673801 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:31:11.684940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:31:11.697452 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:31:11.700897 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:31:11.703719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:31:11.706521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:31:11.711160 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:31:11.743850 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:31:11.743867 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:31:11.744034 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:31:11.744045 kernel: libata version 3.00 loaded. Feb 13 15:31:11.744063 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:31:11.744073 kernel: GPT:9289727 != 19775487 Feb 13 15:31:11.744084 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:31:11.744094 kernel: GPT:9289727 != 19775487 Feb 13 15:31:11.744103 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:31:11.744113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:31:11.744124 kernel: AES CTR mode by8 optimization enabled Feb 13 15:31:11.722017 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:31:11.725556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:31:11.725659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:31:11.727167 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:31:11.728288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:31:11.728408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:11.729614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:31:11.731965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:31:11.733618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 15:31:11.745565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:31:11.745683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:11.747989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:31:11.765101 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:31:11.794905 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:31:11.794922 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:31:11.795066 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:31:11.795214 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (468) Feb 13 15:31:11.795226 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Feb 13 15:31:11.795237 kernel: scsi host0: ahci Feb 13 15:31:11.795380 kernel: scsi host1: ahci Feb 13 15:31:11.796150 kernel: scsi host2: ahci Feb 13 15:31:11.796347 kernel: scsi host3: ahci Feb 13 15:31:11.796499 kernel: scsi host4: ahci Feb 13 15:31:11.796650 kernel: scsi host5: ahci Feb 13 15:31:11.796809 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:31:11.796821 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:31:11.796844 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:31:11.796855 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:31:11.796865 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:31:11.796876 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:31:11.772499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:11.786975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:31:11.796409 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:31:11.805250 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:31:11.815630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:31:11.822354 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:31:11.828256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:31:11.831503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:31:11.845936 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:31:11.854940 disk-uuid[581]: Primary Header is updated. Feb 13 15:31:11.854940 disk-uuid[581]: Secondary Entries is updated. Feb 13 15:31:11.854940 disk-uuid[581]: Secondary Header is updated. 
Feb 13 15:31:11.858152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:31:12.101761 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:31:12.101814 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:31:12.102759 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:31:12.103768 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:31:12.103848 kernel: ata3.00: applying bridge limits Feb 13 15:31:12.104752 kernel: ata3.00: configured for UDMA/100 Feb 13 15:31:12.106760 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:31:12.110758 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:31:12.110776 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:31:12.110786 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:31:12.152309 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:31:12.164403 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:31:12.164421 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:31:12.865757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:31:12.865810 disk-uuid[582]: The operation has completed successfully. Feb 13 15:31:12.893167 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:31:12.893286 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:31:12.920871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:31:12.926908 sh[598]: Success Feb 13 15:31:12.939757 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:31:12.975937 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:31:12.994424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:31:12.997327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:31:13.008526 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:31:13.008568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:31:13.008579 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:31:13.009531 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:31:13.010274 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:31:13.014539 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:31:13.016954 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:31:13.030852 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:31:13.033451 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:31:13.042356 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:31:13.042385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:31:13.042396 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:31:13.044921 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:31:13.053928 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:31:13.056028 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:31:13.064847 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:31:13.071870 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:31:13.125263 ignition[694]: Ignition 2.20.0 Feb 13 15:31:13.125279 ignition[694]: Stage: fetch-offline Feb 13 15:31:13.125319 ignition[694]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:13.125328 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:13.125413 ignition[694]: parsed url from cmdline: "" Feb 13 15:31:13.125417 ignition[694]: no config URL provided Feb 13 15:31:13.125422 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:31:13.125433 ignition[694]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:31:13.125460 ignition[694]: op(1): [started] loading QEMU firmware config module Feb 13 15:31:13.125466 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:31:13.134090 ignition[694]: op(1): [finished] loading QEMU firmware config module Feb 13 15:31:13.154434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:31:13.168872 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:31:13.175331 ignition[694]: parsing config with SHA512: ab207fa553a2c387c5e47bbe5b583d98046d7f20684bc47892039e061a0dbba362c25d96dd20df491c034d1c9926062468838f0334dc9425186751de1dc20a1d Feb 13 15:31:13.180566 unknown[694]: fetched base config from "system" Feb 13 15:31:13.180578 unknown[694]: fetched user config from "qemu" Feb 13 15:31:13.182450 ignition[694]: fetch-offline: fetch-offline passed Feb 13 15:31:13.183289 ignition[694]: Ignition finished successfully Feb 13 15:31:13.185698 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:31:13.191659 systemd-networkd[786]: lo: Link UP Feb 13 15:31:13.191670 systemd-networkd[786]: lo: Gained carrier Feb 13 15:31:13.193223 systemd-networkd[786]: Enumeration completed Feb 13 15:31:13.193615 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:31:13.193619 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:31:13.194001 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:31:13.195812 systemd-networkd[786]: eth0: Link UP Feb 13 15:31:13.195816 systemd-networkd[786]: eth0: Gained carrier Feb 13 15:31:13.195822 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:31:13.198876 systemd[1]: Reached target network.target - Network. Feb 13 15:31:13.205026 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:31:13.212776 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:31:13.215865 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:31:13.231601 ignition[789]: Ignition 2.20.0 Feb 13 15:31:13.231613 ignition[789]: Stage: kargs Feb 13 15:31:13.231820 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:13.231834 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:13.232825 ignition[789]: kargs: kargs passed Feb 13 15:31:13.232877 ignition[789]: Ignition finished successfully Feb 13 15:31:13.239142 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:31:13.243891 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:31:13.257177 ignition[799]: Ignition 2.20.0 Feb 13 15:31:13.257188 ignition[799]: Stage: disks Feb 13 15:31:13.257364 ignition[799]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:13.257378 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:13.258493 ignition[799]: disks: disks passed Feb 13 15:31:13.258541 ignition[799]: Ignition finished successfully Feb 13 15:31:13.263838 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:31:13.264445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:31:13.266147 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:31:13.266465 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:31:13.266970 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:31:13.267299 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:31:13.280892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:31:13.292524 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:31:13.298939 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:31:13.311819 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:31:13.394763 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:31:13.395232 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:31:13.396207 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:31:13.406799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:31:13.408566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:31:13.409949 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 15:31:13.409995 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:31:13.417279 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817) Feb 13 15:31:13.410018 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:31:13.420959 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:31:13.420976 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:31:13.420988 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:31:13.422757 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:31:13.424591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:31:13.427393 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:31:13.430306 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:31:13.464548 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:31:13.469517 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:31:13.473992 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:31:13.478613 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:31:13.562928 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:31:13.575832 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:31:13.576856 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:31:13.586767 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:31:13.598370 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:31:13.610902 ignition[931]: INFO : Ignition 2.20.0 Feb 13 15:31:13.610902 ignition[931]: INFO : Stage: mount Feb 13 15:31:13.612614 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:13.612614 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:13.612614 ignition[931]: INFO : mount: mount passed Feb 13 15:31:13.612614 ignition[931]: INFO : Ignition finished successfully Feb 13 15:31:13.618010 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:31:13.627931 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:31:14.008513 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:31:14.020945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:31:14.026753 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943) Feb 13 15:31:14.028953 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:31:14.028971 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:31:14.028987 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:31:14.032751 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:31:14.033752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:31:14.056970 ignition[961]: INFO : Ignition 2.20.0 Feb 13 15:31:14.056970 ignition[961]: INFO : Stage: files Feb 13 15:31:14.058821 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:14.058821 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:14.058821 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:31:14.062342 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:31:14.062342 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:31:14.062342 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:31:14.062342 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:31:14.062342 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:31:14.061878 unknown[961]: wrote ssh authorized keys file for user: core Feb 13 15:31:14.070491 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:31:14.070491 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:31:14.105934 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:31:14.226259 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:31:14.228483 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 15:31:14.375010 systemd-networkd[786]: eth0: Gained IPv6LL Feb 13 15:31:14.697663 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:31:15.038460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:31:15.038460 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:31:15.042618 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:31:15.045018 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:31:15.045018 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:31:15.045018 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:31:15.049708 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:31:15.051901 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:31:15.051901 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:31:15.051901 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:31:15.072223 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:31:15.077196 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:31:15.078959 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:31:15.078959 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:31:15.078959 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:31:15.078959 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:31:15.078959 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:31:15.078959 ignition[961]: INFO : files: files passed Feb 13 15:31:15.078959 ignition[961]: INFO : Ignition finished successfully Feb 13 15:31:15.090177 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:31:15.099970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:31:15.101103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:31:15.108715 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:31:15.108894 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:31:15.113307 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:31:15.117372 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:15.117372 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:15.120810 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:31:15.123518 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:31:15.125391 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:31:15.134869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:31:15.158964 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:31:15.159104 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:31:15.161355 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:31:15.163420 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:31:15.165411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:31:15.175917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:31:15.188272 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:31:15.200844 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:31:15.209307 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:31:15.210560 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:31:15.212764 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:31:15.214751 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:31:15.214858 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:31:15.216977 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:31:15.218678 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:31:15.220659 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:31:15.222671 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:31:15.224663 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Feb 13 15:31:15.226813 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:31:15.228900 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:31:15.231168 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:31:15.233139 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:31:15.235318 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:31:15.237078 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:31:15.237184 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:31:15.239299 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:31:15.240891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:31:15.242943 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:31:15.243065 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:31:15.245130 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:31:15.245234 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:31:15.247387 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:31:15.247491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:31:15.249483 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:31:15.251186 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:31:15.251292 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:31:15.253711 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:31:15.255514 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:31:15.257431 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 13 15:31:15.257520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:31:15.259384 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:31:15.259471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:31:15.261444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:31:15.261550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:31:15.263470 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:31:15.263577 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:31:15.273884 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:31:15.276000 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:31:15.277073 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:31:15.277191 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:31:15.279332 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:31:15.286605 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 15:31:15.286605 ignition[1016]: INFO : Stage: umount Feb 13 15:31:15.286605 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:31:15.286605 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:31:15.279504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:31:15.295326 ignition[1016]: INFO : umount: umount passed Feb 13 15:31:15.295326 ignition[1016]: INFO : Ignition finished successfully Feb 13 15:31:15.284362 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:31:15.284556 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:31:15.288899 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 13 15:31:15.289018 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:31:15.290826 systemd[1]: Stopped target network.target - Network. Feb 13 15:31:15.292820 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:31:15.292880 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:31:15.295339 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:31:15.295386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:31:15.297500 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:31:15.297545 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:31:15.299794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:31:15.299843 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:31:15.302532 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:31:15.305201 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:31:15.308204 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:31:15.312762 systemd-networkd[786]: eth0: DHCPv6 lease lost Feb 13 15:31:15.314376 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:31:15.314551 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:31:15.316362 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:31:15.316497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:31:15.319539 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:31:15.319596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:31:15.335873 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:31:15.337813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Feb 13 15:31:15.337890 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:31:15.341511 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:31:15.341568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:31:15.344515 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:31:15.344566 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:31:15.347640 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:31:15.348654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:31:15.351180 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:31:15.365026 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:31:15.366073 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:31:15.370540 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:31:15.371599 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:31:15.374387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:31:15.374448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:31:15.376586 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:31:15.376622 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:31:15.378579 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:31:15.378627 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:31:15.380873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:31:15.380920 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:31:15.382699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 15:31:15.382760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:31:15.392900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:31:15.394020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:31:15.394092 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:31:15.396469 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:31:15.396518 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:31:15.398682 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:31:15.398743 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:31:15.401175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:31:15.401222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:31:15.403703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:31:15.403833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:31:15.462402 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:31:15.463422 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:31:15.465762 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:31:15.467783 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:31:15.468773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:31:15.484853 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:31:15.491455 systemd[1]: Switching root. Feb 13 15:31:15.523894 systemd-journald[194]: Journal stopped Feb 13 15:31:16.586296 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:31:16.586374 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:31:16.586387 kernel: SELinux: policy capability open_perms=1 Feb 13 15:31:16.586399 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:31:16.586410 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:31:16.586421 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:31:16.586435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:31:16.586447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:31:16.586464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:31:16.586475 kernel: audit: type=1403 audit(1739460675.869:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:31:16.586487 systemd[1]: Successfully loaded SELinux policy in 39.608ms. Feb 13 15:31:16.586510 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.976ms. Feb 13 15:31:16.586523 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:31:16.586536 systemd[1]: Detected virtualization kvm. Feb 13 15:31:16.586548 systemd[1]: Detected architecture x86-64. Feb 13 15:31:16.586562 systemd[1]: Detected first boot. Feb 13 15:31:16.586574 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:31:16.586586 zram_generator::config[1060]: No configuration found. Feb 13 15:31:16.586598 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:31:16.586610 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:31:16.586622 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 15:31:16.586634 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:31:16.586646 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:31:16.586661 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:31:16.586673 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:31:16.586685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:31:16.586697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:31:16.586709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:31:16.586722 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:31:16.586748 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:31:16.586760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:31:16.586775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:31:16.586787 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:31:16.586799 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:31:16.586811 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:31:16.586823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:31:16.586835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:31:16.586847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:31:16.586859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 15:31:16.586871 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:31:16.586885 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:31:16.586897 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:31:16.586909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:31:16.586921 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:31:16.586933 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:31:16.586945 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:31:16.586957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:31:16.586968 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:31:16.586983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:31:16.586996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:31:16.587008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:31:16.587019 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:31:16.587043 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:31:16.587055 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:31:16.587068 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:31:16.587080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:16.587092 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:31:16.587106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:31:16.587118 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 15:31:16.587131 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:31:16.587143 systemd[1]: Reached target machines.target - Containers. Feb 13 15:31:16.587155 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:31:16.587166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:31:16.587178 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:31:16.587190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:31:16.587204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:31:16.587216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:31:16.587228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:31:16.587240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:31:16.587252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:31:16.587265 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:31:16.587278 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:31:16.587290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:31:16.587301 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:31:16.587316 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:31:16.587328 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:31:16.587340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 15:31:16.587352 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:31:16.587363 kernel: fuse: init (API version 7.39) Feb 13 15:31:16.587375 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:31:16.587387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:31:16.587399 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:31:16.587410 systemd[1]: Stopped verity-setup.service. Feb 13 15:31:16.587425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:31:16.587437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:31:16.587449 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:31:16.587461 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:31:16.587475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:31:16.587487 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:31:16.587499 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:31:16.587527 systemd-journald[1130]: Collecting audit messages is disabled. Feb 13 15:31:16.587551 kernel: loop: module loaded Feb 13 15:31:16.587562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:31:16.587574 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:31:16.587587 systemd-journald[1130]: Journal started Feb 13 15:31:16.587611 systemd-journald[1130]: Runtime Journal (/run/log/journal/eef3c74a60fb47c9b6fc352e71a8f0cc) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:31:16.372947 systemd[1]: Queued start job for default target multi-user.target. 
Feb 13 15:31:16.386861 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:31:16.387280 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:31:16.589496 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:31:16.594310 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:31:16.594345 kernel: ACPI: bus type drm_connector registered Feb 13 15:31:16.596146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:31:16.596340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:31:16.597949 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:31:16.598207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:31:16.599796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:31:16.599964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:31:16.601842 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:31:16.602014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:31:16.603522 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:31:16.605006 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:31:16.605188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:31:16.606586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:31:16.608018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:31:16.609678 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:31:16.625463 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:31:16.632819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 15:31:16.635130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:31:16.636286 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:31:16.636318 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:31:16.638311 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:31:16.640627 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:31:16.642912 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:31:16.644116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:31:16.647566 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:31:16.650858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:31:16.652198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:31:16.654788 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:31:16.656163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:31:16.657185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:31:16.663156 systemd-journald[1130]: Time spent on flushing to /var/log/journal/eef3c74a60fb47c9b6fc352e71a8f0cc is 13.666ms for 1040 entries. Feb 13 15:31:16.663156 systemd-journald[1130]: System Journal (/var/log/journal/eef3c74a60fb47c9b6fc352e71a8f0cc) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:31:16.703774 systemd-journald[1130]: Received client request to flush runtime journal. 
Feb 13 15:31:16.703806 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 15:31:16.661957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:31:16.665188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:31:16.669466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:31:16.672963 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:31:16.674506 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:31:16.686171 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:31:16.687652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:31:16.692882 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:31:16.697274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:31:16.705859 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:31:16.707512 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:31:16.712914 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:31:16.712932 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:31:16.714118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:31:16.719305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:31:16.729803 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:31:16.734064 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:31:16.735998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:31:16.736785 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:31:16.739344 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:31:16.757774 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 15:31:16.762357 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:31:16.770196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:31:16.786152 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Feb 13 15:31:16.786173 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Feb 13 15:31:16.791401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:31:16.795755 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:31:16.828756 kernel: loop3: detected capacity change from 0 to 210664
Feb 13 15:31:16.836747 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 15:31:16.849755 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 15:31:16.860897 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:31:16.862164 (sd-merge)[1204]: Merged extensions into '/usr'.
Feb 13 15:31:16.865948 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:31:16.865964 systemd[1]: Reloading...
Feb 13 15:31:16.918758 zram_generator::config[1229]: No configuration found.
Feb 13 15:31:16.959510 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:31:17.043353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:31:17.092118 systemd[1]: Reloading finished in 225 ms.
Feb 13 15:31:17.131004 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:31:17.132562 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:31:17.144893 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:31:17.146791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:31:17.155442 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:31:17.155462 systemd[1]: Reloading...
Feb 13 15:31:17.169812 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:31:17.170181 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:31:17.171161 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:31:17.171452 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Feb 13 15:31:17.171527 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Feb 13 15:31:17.175528 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:31:17.175621 systemd-tmpfiles[1268]: Skipping /boot
Feb 13 15:31:17.188664 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:31:17.188677 systemd-tmpfiles[1268]: Skipping /boot
Feb 13 15:31:17.203773 zram_generator::config[1298]: No configuration found.
Feb 13 15:31:17.309773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:31:17.358607 systemd[1]: Reloading finished in 202 ms.
Feb 13 15:31:17.377857 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:31:17.386157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:31:17.394910 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:31:17.397210 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:31:17.399522 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:31:17.403766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:31:17.410922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:31:17.414537 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:31:17.419451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.419614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:31:17.421235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:31:17.424552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:31:17.427953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:31:17.429075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:31:17.431990 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:31:17.433128 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.437212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.437415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:31:17.437614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:31:17.437758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.441885 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:31:17.443798 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:31:17.444088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:31:17.445739 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Feb 13 15:31:17.446203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:31:17.446435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:31:17.457182 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:31:17.459281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:31:17.459558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:31:17.464810 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:31:17.467806 augenrules[1368]: No rules
Feb 13 15:31:17.467998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.468185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:31:17.475958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:31:17.477901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:31:17.477954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:31:17.478019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:31:17.480945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:31:17.484288 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:31:17.485895 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:31:17.486138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:31:17.487677 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:31:17.489112 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:31:17.489801 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:31:17.491182 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:31:17.491537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:31:17.493013 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:31:17.513894 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:31:17.515027 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:31:17.525021 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:31:17.530162 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:31:17.532776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1391)
Feb 13 15:31:17.580330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:31:17.587885 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:31:17.598455 systemd-resolved[1337]: Positive Trust Anchors:
Feb 13 15:31:17.600947 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:31:17.600987 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:31:17.606547 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:31:17.607983 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:31:17.608533 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Feb 13 15:31:17.611428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:31:17.613078 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:31:17.615745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:31:17.618897 systemd-networkd[1403]: lo: Link UP
Feb 13 15:31:17.618910 systemd-networkd[1403]: lo: Gained carrier
Feb 13 15:31:17.621293 systemd-networkd[1403]: Enumeration completed
Feb 13 15:31:17.621412 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:31:17.621770 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:31:17.621776 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:31:17.622822 systemd[1]: Reached target network.target - Network.
Feb 13 15:31:17.623872 systemd-networkd[1403]: eth0: Link UP
Feb 13 15:31:17.623884 systemd-networkd[1403]: eth0: Gained carrier
Feb 13 15:31:17.623907 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:31:17.628911 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:31:17.631254 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:31:17.632891 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:31:17.633783 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:31:17.635768 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 15:31:17.639897 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:31:17.640111 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:31:17.640331 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:31:17.640491 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 15:31:17.638880 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection.
Feb 13 15:31:19.323923 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:31:19.323969 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-02-13 15:31:19.323812 UTC.
Feb 13 15:31:19.324081 systemd-resolved[1337]: Clock change detected. Flushing caches.
Feb 13 15:31:19.369034 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:31:19.370299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:19.381006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:31:19.381247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:19.429145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:19.445249 kernel: kvm_amd: TSC scaling supported
Feb 13 15:31:19.445289 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:31:19.445302 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:31:19.446222 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:31:19.446237 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:31:19.446921 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:31:19.466946 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:31:19.490000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:19.506935 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:31:19.522043 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:31:19.530261 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:31:19.560096 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:31:19.561627 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:31:19.562722 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:31:19.563868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:31:19.565101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:31:19.566519 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:31:19.567676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:31:19.568899 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:31:19.570120 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:31:19.570148 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:31:19.571066 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:31:19.572690 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:31:19.575466 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:31:19.585272 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:31:19.587507 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:31:19.589028 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:31:19.590142 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:31:19.591093 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:31:19.592029 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:31:19.592051 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:31:19.592964 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:31:19.594962 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:31:19.597972 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:31:19.598838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:31:19.602412 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:31:19.603440 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:31:19.605766 jq[1442]: false
Feb 13 15:31:19.605849 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:31:19.609025 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:31:19.612140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:31:19.616100 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:31:19.622043 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:31:19.623872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:31:19.624336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:31:19.629753 extend-filesystems[1443]: Found loop3
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found loop4
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found loop5
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found sr0
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda1
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda2
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda3
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found usr
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda4
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda6
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda7
Feb 13 15:31:19.632279 extend-filesystems[1443]: Found vda9
Feb 13 15:31:19.632279 extend-filesystems[1443]: Checking size of /dev/vda9
Feb 13 15:31:19.631870 dbus-daemon[1441]: [system] SELinux support is enabled
Feb 13 15:31:19.631058 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:31:19.637479 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:31:19.643796 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:31:19.646384 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:31:19.649068 jq[1458]: true
Feb 13 15:31:19.653929 update_engine[1452]: I20250213 15:31:19.652623 1452 main.cc:92] Flatcar Update Engine starting
Feb 13 15:31:19.659917 update_engine[1452]: I20250213 15:31:19.655494 1452 update_check_scheduler.cc:74] Next update check in 8m38s
Feb 13 15:31:19.659941 extend-filesystems[1443]: Resized partition /dev/vda9
Feb 13 15:31:19.661514 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:31:19.661242 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:31:19.661460 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:31:19.663505 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:31:19.663707 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:31:19.665941 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1382)
Feb 13 15:31:19.668942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:31:19.672042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:31:19.672344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:31:19.696321 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:31:19.701940 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:31:19.703398 jq[1467]: true
Feb 13 15:31:19.712261 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:31:19.721201 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:31:19.721224 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:31:19.722755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:31:19.722782 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:31:19.722827 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:31:19.722847 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:31:19.724514 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:31:19.724514 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:31:19.724514 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:31:19.730193 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Feb 13 15:31:19.724929 systemd-logind[1451]: New seat seat0.
Feb 13 15:31:19.736090 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:31:19.738154 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:31:19.739620 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:31:19.739863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:31:19.745621 tar[1466]: linux-amd64/helm
Feb 13 15:31:19.760463 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:31:19.767640 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:31:19.768847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:31:19.771935 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:31:19.790233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:31:19.815171 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:31:19.838596 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:31:19.851659 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:31:19.854134 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564).
Feb 13 15:31:19.860934 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:31:19.861742 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:31:19.868479 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:31:19.881573 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:31:19.886241 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:31:19.889174 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:31:19.891083 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:31:19.901926 containerd[1468]: time="2025-02-13T15:31:19.901768669Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:31:19.920966 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE
Feb 13 15:31:19.924010 containerd[1468]: time="2025-02-13T15:31:19.923966117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.924334 sshd-session[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:31:19.926204 containerd[1468]: time="2025-02-13T15:31:19.926169119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:31:19.926278 containerd[1468]: time="2025-02-13T15:31:19.926262614Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:31:19.926329 containerd[1468]: time="2025-02-13T15:31:19.926317126Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:31:19.926532 containerd[1468]: time="2025-02-13T15:31:19.926516220Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:31:19.926585 containerd[1468]: time="2025-02-13T15:31:19.926573697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.926689 containerd[1468]: time="2025-02-13T15:31:19.926673485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.926726564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.926941587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.926956525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.926969680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.926979458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.927069638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927335 containerd[1468]: time="2025-02-13T15:31:19.927300150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927510 containerd[1468]: time="2025-02-13T15:31:19.927423191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:31:19.927510 containerd[1468]: time="2025-02-13T15:31:19.927435904Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:31:19.927550 containerd[1468]: time="2025-02-13T15:31:19.927526985Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:31:19.927607 containerd[1468]: time="2025-02-13T15:31:19.927580385Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:31:19.932365 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:31:19.934958 containerd[1468]: time="2025-02-13T15:31:19.934862702Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:31:19.934958 containerd[1468]: time="2025-02-13T15:31:19.934924558Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:31:19.934958 containerd[1468]: time="2025-02-13T15:31:19.934942652Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:31:19.934958 containerd[1468]: time="2025-02-13T15:31:19.934958462Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:31:19.935074 containerd[1468]: time="2025-02-13T15:31:19.934972498Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:31:19.935131 containerd[1468]: time="2025-02-13T15:31:19.935089838Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:31:19.935416 containerd[1468]: time="2025-02-13T15:31:19.935387296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:31:19.935527 containerd[1468]: time="2025-02-13T15:31:19.935503905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:31:19.935527 containerd[1468]: time="2025-02-13T15:31:19.935524003Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:31:19.935566 containerd[1468]: time="2025-02-13T15:31:19.935539171Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:31:19.935566 containerd[1468]: time="2025-02-13T15:31:19.935553538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935602 containerd[1468]: time="2025-02-13T15:31:19.935566472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935602 containerd[1468]: time="2025-02-13T15:31:19.935578364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935602 containerd[1468]: time="2025-02-13T15:31:19.935591699Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935660 containerd[1468]: time="2025-02-13T15:31:19.935604944Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935660 containerd[1468]: time="2025-02-13T15:31:19.935617808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935660 containerd[1468]: time="2025-02-13T15:31:19.935629190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935660 containerd[1468]: time="2025-02-13T15:31:19.935640090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:31:19.935660 containerd[1468]: time="2025-02-13T15:31:19.935658875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:31:19.935743 containerd[1468]: time="2025-02-13T15:31:19.935672351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:31:19.935743 containerd[1468]: time="2025-02-13T15:31:19.935702517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:31:19.935743 containerd[1468]: time="2025-02-13T15:31:19.935714600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:31:19.935743 containerd[1468]: time="2025-02-13T15:31:19.935726031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..."
type=io.containerd.grpc.v1 Feb 13 15:31:19.935743 containerd[1468]: time="2025-02-13T15:31:19.935738775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935749355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935761879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935775424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935790232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935802004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935812433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935823394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935840 containerd[1468]: time="2025-02-13T15:31:19.935837210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935855675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935876834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935887033Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935968847Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935985909Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:31:19.935997 containerd[1468]: time="2025-02-13T15:31:19.935995136Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:31:19.936106 containerd[1468]: time="2025-02-13T15:31:19.936005626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:31:19.936106 containerd[1468]: time="2025-02-13T15:31:19.936083081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:31:19.936106 containerd[1468]: time="2025-02-13T15:31:19.936099121Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:31:19.936166 containerd[1468]: time="2025-02-13T15:31:19.936110352Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:31:19.936166 containerd[1468]: time="2025-02-13T15:31:19.936124479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:31:19.936417 containerd[1468]: time="2025-02-13T15:31:19.936369769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:31:19.936417 containerd[1468]: time="2025-02-13T15:31:19.936414703Z" level=info msg="Connect containerd service" Feb 13 15:31:19.936556 containerd[1468]: time="2025-02-13T15:31:19.936438958Z" level=info msg="using legacy CRI server" Feb 13 15:31:19.936556 containerd[1468]: time="2025-02-13T15:31:19.936445661Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:31:19.936593 containerd[1468]: time="2025-02-13T15:31:19.936566928Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:31:19.937137 containerd[1468]: time="2025-02-13T15:31:19.937100339Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937297178Z" level=info msg="Start subscribing containerd event" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937360787Z" level=info msg="Start recovering state" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937436710Z" level=info msg="Start event monitor" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937455204Z" level=info msg="Start 
snapshots syncer" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937466756Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:31:19.937550 containerd[1468]: time="2025-02-13T15:31:19.937474791Z" level=info msg="Start streaming server" Feb 13 15:31:19.937673 containerd[1468]: time="2025-02-13T15:31:19.937644549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:31:19.937776 containerd[1468]: time="2025-02-13T15:31:19.937694854Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:31:19.940068 containerd[1468]: time="2025-02-13T15:31:19.940027919Z" level=info msg="containerd successfully booted in 0.044313s" Feb 13 15:31:19.941145 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:31:19.942492 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:31:19.945681 systemd-logind[1451]: New session 1 of user core. Feb 13 15:31:19.954439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:31:19.962142 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:31:19.966517 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:31:20.076366 systemd[1535]: Queued start job for default target default.target. Feb 13 15:31:20.085180 systemd[1535]: Created slice app.slice - User Application Slice. Feb 13 15:31:20.085206 systemd[1535]: Reached target paths.target - Paths. Feb 13 15:31:20.085219 systemd[1535]: Reached target timers.target - Timers. Feb 13 15:31:20.086710 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:31:20.098077 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:31:20.098199 systemd[1535]: Reached target sockets.target - Sockets. Feb 13 15:31:20.098213 systemd[1535]: Reached target basic.target - Basic System. 
Feb 13 15:31:20.098248 systemd[1535]: Reached target default.target - Main User Target. Feb 13 15:31:20.098278 systemd[1535]: Startup finished in 125ms. Feb 13 15:31:20.098926 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:31:20.111165 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:31:20.128960 tar[1466]: linux-amd64/LICENSE Feb 13 15:31:20.129034 tar[1466]: linux-amd64/README.md Feb 13 15:31:20.156641 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:31:20.164857 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:48570.service - OpenSSH per-connection server daemon (10.0.0.1:48570). Feb 13 15:31:20.205370 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 48570 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:20.207603 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:20.212014 systemd-logind[1451]: New session 2 of user core. Feb 13 15:31:20.224030 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:31:20.279555 sshd[1551]: Connection closed by 10.0.0.1 port 48570 Feb 13 15:31:20.280078 sshd-session[1549]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:20.296232 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:48570.service: Deactivated successfully. Feb 13 15:31:20.298143 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:31:20.299869 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:31:20.309249 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:48574.service - OpenSSH per-connection server daemon (10.0.0.1:48574). Feb 13 15:31:20.311485 systemd-logind[1451]: Removed session 2. 
Feb 13 15:31:20.341921 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 48574 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:20.343318 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:20.346970 systemd-logind[1451]: New session 3 of user core. Feb 13 15:31:20.365037 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:31:20.420030 sshd[1558]: Connection closed by 10.0.0.1 port 48574 Feb 13 15:31:20.420377 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:20.424197 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:48574.service: Deactivated successfully. Feb 13 15:31:20.426110 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:31:20.426664 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:31:20.427513 systemd-logind[1451]: Removed session 3. Feb 13 15:31:20.794065 systemd-networkd[1403]: eth0: Gained IPv6LL Feb 13 15:31:20.797468 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:31:20.799213 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:31:20.811096 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:31:20.813788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:20.816247 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:31:20.833516 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:31:20.833797 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:31:20.835649 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:31:20.844370 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 15:31:21.443289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:21.445091 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:31:21.446444 systemd[1]: Startup finished in 770ms (kernel) + 5.160s (initrd) + 3.932s (userspace) = 9.862s. Feb 13 15:31:21.451102 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:21.878829 kubelet[1584]: E0213 15:31:21.878517 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:21.882709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:21.882921 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:30.430472 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:56804.service - OpenSSH per-connection server daemon (10.0.0.1:56804). Feb 13 15:31:30.472214 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 56804 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:30.473764 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:30.477654 systemd-logind[1451]: New session 4 of user core. Feb 13 15:31:30.497119 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:31:30.550367 sshd[1600]: Connection closed by 10.0.0.1 port 56804 Feb 13 15:31:30.550722 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:30.562234 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:56804.service: Deactivated successfully. Feb 13 15:31:30.563811 systemd[1]: session-4.scope: Deactivated successfully. 
Feb 13 15:31:30.565171 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:31:30.574157 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:56820.service - OpenSSH per-connection server daemon (10.0.0.1:56820). Feb 13 15:31:30.574994 systemd-logind[1451]: Removed session 4. Feb 13 15:31:30.607932 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 56820 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:30.609546 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:30.613414 systemd-logind[1451]: New session 5 of user core. Feb 13 15:31:30.623999 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:31:30.672639 sshd[1607]: Connection closed by 10.0.0.1 port 56820 Feb 13 15:31:30.673237 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:30.690971 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:56820.service: Deactivated successfully. Feb 13 15:31:30.693024 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:31:30.694667 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:31:30.704183 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:56832.service - OpenSSH per-connection server daemon (10.0.0.1:56832). Feb 13 15:31:30.705086 systemd-logind[1451]: Removed session 5. Feb 13 15:31:30.739649 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 56832 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:30.741557 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:30.745997 systemd-logind[1451]: New session 6 of user core. Feb 13 15:31:30.758133 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:31:30.814064 sshd[1614]: Connection closed by 10.0.0.1 port 56832 Feb 13 15:31:30.814439 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:30.834989 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:56832.service: Deactivated successfully. Feb 13 15:31:30.837023 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:31:30.838854 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:31:30.850181 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:56844.service - OpenSSH per-connection server daemon (10.0.0.1:56844). Feb 13 15:31:30.851099 systemd-logind[1451]: Removed session 6. Feb 13 15:31:30.883887 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 56844 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:30.885412 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:30.889431 systemd-logind[1451]: New session 7 of user core. Feb 13 15:31:30.899038 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:31:30.959296 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:31:30.959632 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:30.975060 sudo[1622]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:30.976598 sshd[1621]: Connection closed by 10.0.0.1 port 56844 Feb 13 15:31:30.977015 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:30.987636 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:56844.service: Deactivated successfully. Feb 13 15:31:30.989398 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:31:30.991033 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:31:31.000114 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:56852.service - OpenSSH per-connection server daemon (10.0.0.1:56852). 
Feb 13 15:31:31.000821 systemd-logind[1451]: Removed session 7. Feb 13 15:31:31.033925 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 56852 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:31.035481 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:31.039167 systemd-logind[1451]: New session 8 of user core. Feb 13 15:31:31.049086 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:31:31.102410 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:31:31.102740 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:31.106250 sudo[1631]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:31.112335 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:31:31.112668 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:31.131203 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:31:31.160498 augenrules[1653]: No rules Feb 13 15:31:31.162354 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:31:31.162594 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:31:31.163935 sudo[1630]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:31.165400 sshd[1629]: Connection closed by 10.0.0.1 port 56852 Feb 13 15:31:31.165700 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:31.177504 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:56852.service: Deactivated successfully. Feb 13 15:31:31.179233 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:31:31.180794 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. 
Feb 13 15:31:31.181995 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:56862.service - OpenSSH per-connection server daemon (10.0.0.1:56862). Feb 13 15:31:31.182739 systemd-logind[1451]: Removed session 8. Feb 13 15:31:31.220236 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 56862 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:31:31.221697 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:31.225501 systemd-logind[1451]: New session 9 of user core. Feb 13 15:31:31.232004 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:31:31.284335 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:31:31.284674 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:31.839148 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:31:31.839302 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:31:32.133455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:31:32.145145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:32.362208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:31:32.368010 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:32.414484 dockerd[1684]: time="2025-02-13T15:31:32.413374957Z" level=info msg="Starting up" Feb 13 15:31:32.457507 kubelet[1699]: E0213 15:31:32.457446 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:32.464965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:32.465183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:32.773102 dockerd[1684]: time="2025-02-13T15:31:32.772961701Z" level=info msg="Loading containers: start." Feb 13 15:31:32.940944 kernel: Initializing XFRM netlink socket Feb 13 15:31:33.024179 systemd-networkd[1403]: docker0: Link UP Feb 13 15:31:33.068435 dockerd[1684]: time="2025-02-13T15:31:33.068383342Z" level=info msg="Loading containers: done." Feb 13 15:31:33.084132 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2141995466-merged.mount: Deactivated successfully. 
Feb 13 15:31:33.086575 dockerd[1684]: time="2025-02-13T15:31:33.086535153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:31:33.086658 dockerd[1684]: time="2025-02-13T15:31:33.086630702Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:31:33.086767 dockerd[1684]: time="2025-02-13T15:31:33.086743393Z" level=info msg="Daemon has completed initialization" Feb 13 15:31:33.125283 dockerd[1684]: time="2025-02-13T15:31:33.125218197Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:31:33.125437 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:31:33.842577 containerd[1468]: time="2025-02-13T15:31:33.842531135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:31:34.412155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441404905.mount: Deactivated successfully. 
Feb 13 15:31:35.561989 containerd[1468]: time="2025-02-13T15:31:35.561932446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:35.562787 containerd[1468]: time="2025-02-13T15:31:35.562703683Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:31:35.563937 containerd[1468]: time="2025-02-13T15:31:35.563889176Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:35.568422 containerd[1468]: time="2025-02-13T15:31:35.568378625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.725804319s" Feb 13 15:31:35.568482 containerd[1468]: time="2025-02-13T15:31:35.568427126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:31:35.568886 containerd[1468]: time="2025-02-13T15:31:35.568862653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:35.589009 containerd[1468]: time="2025-02-13T15:31:35.588967507Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:31:37.569021 containerd[1468]: time="2025-02-13T15:31:37.568949720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.569629 containerd[1468]: time="2025-02-13T15:31:37.569578880Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:31:37.570798 containerd[1468]: time="2025-02-13T15:31:37.570743945Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.573979 containerd[1468]: time="2025-02-13T15:31:37.573938376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:37.574893 containerd[1468]: time="2025-02-13T15:31:37.574853151Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.985849907s" Feb 13 15:31:37.574893 containerd[1468]: time="2025-02-13T15:31:37.574885762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 15:31:37.597920 containerd[1468]: time="2025-02-13T15:31:37.597821635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:31:38.494813 containerd[1468]: time="2025-02-13T15:31:38.494757420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:38.495504 containerd[1468]: 
time="2025-02-13T15:31:38.495437916Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:31:38.496392 containerd[1468]: time="2025-02-13T15:31:38.496361168Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:38.499201 containerd[1468]: time="2025-02-13T15:31:38.499167200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:38.500166 containerd[1468]: time="2025-02-13T15:31:38.500135255Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 902.272594ms" Feb 13 15:31:38.500166 containerd[1468]: time="2025-02-13T15:31:38.500160613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:31:38.521128 containerd[1468]: time="2025-02-13T15:31:38.521099260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:31:39.531615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6567479.mount: Deactivated successfully. 
Feb 13 15:31:40.179461 containerd[1468]: time="2025-02-13T15:31:40.179397010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.180210 containerd[1468]: time="2025-02-13T15:31:40.180135706Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:31:40.181331 containerd[1468]: time="2025-02-13T15:31:40.181298056Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.183256 containerd[1468]: time="2025-02-13T15:31:40.183223957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:40.183866 containerd[1468]: time="2025-02-13T15:31:40.183820877Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.662690659s" Feb 13 15:31:40.183897 containerd[1468]: time="2025-02-13T15:31:40.183866593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:31:40.207800 containerd[1468]: time="2025-02-13T15:31:40.207752868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:31:40.691034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271486343.mount: Deactivated successfully. 
Feb 13 15:31:41.449157 containerd[1468]: time="2025-02-13T15:31:41.449089357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.449880 containerd[1468]: time="2025-02-13T15:31:41.449821871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:31:41.451033 containerd[1468]: time="2025-02-13T15:31:41.450995211Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.456509 containerd[1468]: time="2025-02-13T15:31:41.456451493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.457534 containerd[1468]: time="2025-02-13T15:31:41.457493888Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.249701416s" Feb 13 15:31:41.457599 containerd[1468]: time="2025-02-13T15:31:41.457533282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:31:41.481968 containerd[1468]: time="2025-02-13T15:31:41.481892214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:31:41.964019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911834326.mount: Deactivated successfully. 
Feb 13 15:31:41.971298 containerd[1468]: time="2025-02-13T15:31:41.971218876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.971965 containerd[1468]: time="2025-02-13T15:31:41.971924569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:31:41.973181 containerd[1468]: time="2025-02-13T15:31:41.973157772Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.975472 containerd[1468]: time="2025-02-13T15:31:41.975397082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:41.976113 containerd[1468]: time="2025-02-13T15:31:41.976085102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 493.972896ms" Feb 13 15:31:41.976164 containerd[1468]: time="2025-02-13T15:31:41.976116210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:31:41.998500 containerd[1468]: time="2025-02-13T15:31:41.998425388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:31:42.467340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:31:42.475071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:31:42.623399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:42.627548 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:42.708429 kubelet[2070]: E0213 15:31:42.708360 2070 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:42.711969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:42.712210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:42.784474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135470753.mount: Deactivated successfully. Feb 13 15:31:45.218626 containerd[1468]: time="2025-02-13T15:31:45.218550151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.219345 containerd[1468]: time="2025-02-13T15:31:45.219299807Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:31:45.220592 containerd[1468]: time="2025-02-13T15:31:45.220563808Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.223935 containerd[1468]: time="2025-02-13T15:31:45.223866141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:45.226549 containerd[1468]: time="2025-02-13T15:31:45.226301718Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.227829212s" Feb 13 15:31:45.226549 containerd[1468]: time="2025-02-13T15:31:45.226335191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:31:47.534335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:47.544156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:47.561482 systemd[1]: Reloading requested from client PID 2207 ('systemctl') (unit session-9.scope)... Feb 13 15:31:47.561504 systemd[1]: Reloading... Feb 13 15:31:47.642939 zram_generator::config[2246]: No configuration found. Feb 13 15:31:47.831385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:47.906468 systemd[1]: Reloading finished in 344 ms. Feb 13 15:31:47.956494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:47.959191 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:31:47.959438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:47.961000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:48.099352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:31:48.104519 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:31:48.146227 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:48.146227 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:31:48.146227 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:48.147210 kubelet[2296]: I0213 15:31:48.147168 2296 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:31:48.575332 kubelet[2296]: I0213 15:31:48.575296 2296 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:31:48.575332 kubelet[2296]: I0213 15:31:48.575321 2296 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:31:48.575517 kubelet[2296]: I0213 15:31:48.575501 2296 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:31:48.590186 kubelet[2296]: I0213 15:31:48.590156 2296 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:31:48.590651 kubelet[2296]: E0213 15:31:48.590602 2296 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.600078 kubelet[2296]: I0213 15:31:48.600056 2296 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:31:48.600294 kubelet[2296]: I0213 15:31:48.600274 2296 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:31:48.600439 kubelet[2296]: I0213 15:31:48.600294 2296 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit
":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:31:48.600877 kubelet[2296]: I0213 15:31:48.600859 2296 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:31:48.600877 kubelet[2296]: I0213 15:31:48.600874 2296 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:31:48.601411 kubelet[2296]: I0213 15:31:48.601393 2296 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:48.601993 kubelet[2296]: I0213 15:31:48.601979 2296 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:31:48.601993 kubelet[2296]: I0213 15:31:48.601993 2296 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:31:48.602044 kubelet[2296]: I0213 15:31:48.602014 2296 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:31:48.602044 kubelet[2296]: I0213 15:31:48.602035 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:31:48.602521 kubelet[2296]: W0213 15:31:48.602438 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.602521 kubelet[2296]: E0213 15:31:48.602497 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.603175 kubelet[2296]: W0213 15:31:48.603141 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.603214 
kubelet[2296]: E0213 15:31:48.603185 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.606219 kubelet[2296]: I0213 15:31:48.606191 2296 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:31:48.607561 kubelet[2296]: I0213 15:31:48.607537 2296 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:31:48.607607 kubelet[2296]: W0213 15:31:48.607594 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:31:48.608371 kubelet[2296]: I0213 15:31:48.608278 2296 server.go:1264] "Started kubelet" Feb 13 15:31:48.611827 kubelet[2296]: I0213 15:31:48.609519 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:31:48.611827 kubelet[2296]: I0213 15:31:48.609759 2296 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:31:48.611827 kubelet[2296]: I0213 15:31:48.609787 2296 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:31:48.611827 kubelet[2296]: I0213 15:31:48.609791 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:31:48.611827 kubelet[2296]: I0213 15:31:48.610576 2296 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:31:48.612910 kubelet[2296]: E0213 15:31:48.612804 2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce4c65582c94 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:31:48.608257172 +0000 UTC m=+0.499654091,LastTimestamp:2025-02-13 15:31:48.608257172 +0000 UTC m=+0.499654091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:31:48.613074 kubelet[2296]: I0213 15:31:48.612983 2296 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:31:48.613285 kubelet[2296]: I0213 15:31:48.612990 2296 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:31:48.613401 kubelet[2296]: W0213 15:31:48.613343 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.613569 kubelet[2296]: E0213 15:31:48.613557 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.613641 kubelet[2296]: E0213 15:31:48.613569 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" Feb 13 15:31:48.613701 kubelet[2296]: I0213 15:31:48.613403 2296 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:31:48.614231 kubelet[2296]: I0213 15:31:48.614213 2296 factory.go:221] Registration of the systemd container 
factory successfully Feb 13 15:31:48.614299 kubelet[2296]: I0213 15:31:48.614281 2296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:31:48.614495 kubelet[2296]: E0213 15:31:48.614481 2296 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:31:48.614954 kubelet[2296]: I0213 15:31:48.614939 2296 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:31:48.631310 kubelet[2296]: I0213 15:31:48.631262 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:31:48.632620 kubelet[2296]: I0213 15:31:48.632544 2296 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:31:48.632620 kubelet[2296]: I0213 15:31:48.632583 2296 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:31:48.632620 kubelet[2296]: I0213 15:31:48.632599 2296 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:48.632771 kubelet[2296]: I0213 15:31:48.632672 2296 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:31:48.632771 kubelet[2296]: I0213 15:31:48.632692 2296 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:31:48.632771 kubelet[2296]: I0213 15:31:48.632705 2296 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:31:48.632771 kubelet[2296]: E0213 15:31:48.632738 2296 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:31:48.633473 kubelet[2296]: W0213 15:31:48.633424 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.633473 kubelet[2296]: E0213 15:31:48.633473 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:48.636590 kubelet[2296]: I0213 15:31:48.636571 2296 policy_none.go:49] "None policy: Start" Feb 13 15:31:48.637058 kubelet[2296]: I0213 15:31:48.637044 2296 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:31:48.637152 kubelet[2296]: I0213 15:31:48.637065 2296 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:31:48.643338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:31:48.655519 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:31:48.658164 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:31:48.667768 kubelet[2296]: I0213 15:31:48.667701 2296 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:31:48.667955 kubelet[2296]: I0213 15:31:48.667919 2296 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:31:48.668071 kubelet[2296]: I0213 15:31:48.668047 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:31:48.668898 kubelet[2296]: E0213 15:31:48.668860 2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:31:48.714995 kubelet[2296]: I0213 15:31:48.714881 2296 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:31:48.715375 kubelet[2296]: E0213 15:31:48.715319 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Feb 13 15:31:48.733071 kubelet[2296]: I0213 15:31:48.733037 2296 topology_manager.go:215] "Topology Admit Handler" podUID="0beba8fa7ac9416f2a8a701518aa25b0" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:31:48.733850 kubelet[2296]: I0213 15:31:48.733830 2296 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:31:48.734588 kubelet[2296]: I0213 15:31:48.734571 2296 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:31:48.739604 systemd[1]: Created slice kubepods-burstable-pod0beba8fa7ac9416f2a8a701518aa25b0.slice - libcontainer container kubepods-burstable-pod0beba8fa7ac9416f2a8a701518aa25b0.slice. 
Feb 13 15:31:48.759621 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:31:48.763152 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 15:31:48.815140 kubelet[2296]: E0213 15:31:48.815074 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms" Feb 13 15:31:48.915754 kubelet[2296]: I0213 15:31:48.915630 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:31:48.915754 kubelet[2296]: I0213 15:31:48.915697 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:31:48.915754 kubelet[2296]: I0213 15:31:48.915743 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:31:48.915949 kubelet[2296]: I0213 15:31:48.915760 2296 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:31:48.915949 kubelet[2296]: I0213 15:31:48.915775 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:31:48.915949 kubelet[2296]: I0213 15:31:48.915793 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:31:48.915949 kubelet[2296]: I0213 15:31:48.915806 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:31:48.915949 kubelet[2296]: I0213 15:31:48.915821 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:31:48.916100 kubelet[2296]: I0213 
15:31:48.915833 2296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:31:48.917186 kubelet[2296]: I0213 15:31:48.917082 2296 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:31:48.917449 kubelet[2296]: E0213 15:31:48.917428 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Feb 13 15:31:49.057190 kubelet[2296]: E0213 15:31:49.057145 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.057798 containerd[1468]: time="2025-02-13T15:31:49.057756754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0beba8fa7ac9416f2a8a701518aa25b0,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:49.061942 kubelet[2296]: E0213 15:31:49.061925 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.062263 containerd[1468]: time="2025-02-13T15:31:49.062234582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:49.065503 kubelet[2296]: E0213 15:31:49.065488 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.065799 containerd[1468]: 
time="2025-02-13T15:31:49.065770563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:31:49.216567 kubelet[2296]: E0213 15:31:49.216457 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms" Feb 13 15:31:49.319303 kubelet[2296]: I0213 15:31:49.319272 2296 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:31:49.319590 kubelet[2296]: E0213 15:31:49.319565 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Feb 13 15:31:49.546366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879515081.mount: Deactivated successfully. 
Feb 13 15:31:49.552673 containerd[1468]: time="2025-02-13T15:31:49.552633465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:49.555186 containerd[1468]: time="2025-02-13T15:31:49.555146307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:31:49.556105 containerd[1468]: time="2025-02-13T15:31:49.556086039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:49.558079 containerd[1468]: time="2025-02-13T15:31:49.558045424Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:49.558840 containerd[1468]: time="2025-02-13T15:31:49.558796372Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:31:49.559836 containerd[1468]: time="2025-02-13T15:31:49.559803571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:49.560744 containerd[1468]: time="2025-02-13T15:31:49.560701966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:31:49.563349 containerd[1468]: time="2025-02-13T15:31:49.563324404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:31:49.564110 
containerd[1468]: time="2025-02-13T15:31:49.564089819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.794914ms" Feb 13 15:31:49.567004 containerd[1468]: time="2025-02-13T15:31:49.566980981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.106516ms" Feb 13 15:31:49.567671 containerd[1468]: time="2025-02-13T15:31:49.567651639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.799793ms" Feb 13 15:31:49.598373 kubelet[2296]: W0213 15:31:49.598283 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.598525 kubelet[2296]: E0213 15:31:49.598377 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.641163 kubelet[2296]: W0213 15:31:49.641089 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.641163 kubelet[2296]: E0213 15:31:49.641160 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.679060 kubelet[2296]: W0213 15:31:49.678980 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.679060 kubelet[2296]: E0213 15:31:49.679068 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:49.812803 containerd[1468]: time="2025-02-13T15:31:49.812593182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:49.812981 containerd[1468]: time="2025-02-13T15:31:49.812925215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:49.813042 containerd[1468]: time="2025-02-13T15:31:49.812971000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.813186 containerd[1468]: time="2025-02-13T15:31:49.813153974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.814263 containerd[1468]: time="2025-02-13T15:31:49.811728270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:49.814263 containerd[1468]: time="2025-02-13T15:31:49.814097313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:49.814263 containerd[1468]: time="2025-02-13T15:31:49.814110147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.814263 containerd[1468]: time="2025-02-13T15:31:49.814181581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.821594 containerd[1468]: time="2025-02-13T15:31:49.821471462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:31:49.821594 containerd[1468]: time="2025-02-13T15:31:49.821534600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:31:49.821594 containerd[1468]: time="2025-02-13T15:31:49.821547565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.821769 containerd[1468]: time="2025-02-13T15:31:49.821634528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:31:49.880166 systemd[1]: Started cri-containerd-61a10a60d276aa2f05fca2e03ce36a055aec100d1d91f597dc751a4a48535fe0.scope - libcontainer container 61a10a60d276aa2f05fca2e03ce36a055aec100d1d91f597dc751a4a48535fe0. 
Feb 13 15:31:49.884372 systemd[1]: Started cri-containerd-9d6472c6aec1822ef19689bd27078f93a2fa6ed812953dacbcf58bd91f59df57.scope - libcontainer container 9d6472c6aec1822ef19689bd27078f93a2fa6ed812953dacbcf58bd91f59df57. Feb 13 15:31:49.888757 systemd[1]: Started cri-containerd-d29d16f3280c8b9fdfc8d2b5e9ef8862f6990bb427c1d773ae6939ebac782c3a.scope - libcontainer container d29d16f3280c8b9fdfc8d2b5e9ef8862f6990bb427c1d773ae6939ebac782c3a. Feb 13 15:31:49.933390 containerd[1468]: time="2025-02-13T15:31:49.933288729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0beba8fa7ac9416f2a8a701518aa25b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"61a10a60d276aa2f05fca2e03ce36a055aec100d1d91f597dc751a4a48535fe0\"" Feb 13 15:31:49.934653 kubelet[2296]: E0213 15:31:49.934604 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.937420 containerd[1468]: time="2025-02-13T15:31:49.937325720Z" level=info msg="CreateContainer within sandbox \"61a10a60d276aa2f05fca2e03ce36a055aec100d1d91f597dc751a4a48535fe0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:31:49.938464 containerd[1468]: time="2025-02-13T15:31:49.938417918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d6472c6aec1822ef19689bd27078f93a2fa6ed812953dacbcf58bd91f59df57\"" Feb 13 15:31:49.939046 kubelet[2296]: E0213 15:31:49.939014 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.941409 containerd[1468]: time="2025-02-13T15:31:49.940778695Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29d16f3280c8b9fdfc8d2b5e9ef8862f6990bb427c1d773ae6939ebac782c3a\"" Feb 13 15:31:49.941498 containerd[1468]: time="2025-02-13T15:31:49.941411622Z" level=info msg="CreateContainer within sandbox \"9d6472c6aec1822ef19689bd27078f93a2fa6ed812953dacbcf58bd91f59df57\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:31:49.942077 kubelet[2296]: E0213 15:31:49.942049 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:49.943699 containerd[1468]: time="2025-02-13T15:31:49.943641284Z" level=info msg="CreateContainer within sandbox \"d29d16f3280c8b9fdfc8d2b5e9ef8862f6990bb427c1d773ae6939ebac782c3a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:31:50.017350 kubelet[2296]: E0213 15:31:50.017283 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="1.6s" Feb 13 15:31:50.121287 kubelet[2296]: I0213 15:31:50.121169 2296 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:31:50.121593 kubelet[2296]: E0213 15:31:50.121510 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Feb 13 15:31:50.141625 containerd[1468]: time="2025-02-13T15:31:50.141580991Z" level=info msg="CreateContainer within sandbox \"9d6472c6aec1822ef19689bd27078f93a2fa6ed812953dacbcf58bd91f59df57\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"b1cf314c6b05efd1fcf41adcb2ec3536a403aa74fb81bcff8ac39e81c0e96ce8\"" Feb 13 15:31:50.142213 containerd[1468]: time="2025-02-13T15:31:50.142172471Z" level=info msg="StartContainer for \"b1cf314c6b05efd1fcf41adcb2ec3536a403aa74fb81bcff8ac39e81c0e96ce8\"" Feb 13 15:31:50.143798 containerd[1468]: time="2025-02-13T15:31:50.143765739Z" level=info msg="CreateContainer within sandbox \"61a10a60d276aa2f05fca2e03ce36a055aec100d1d91f597dc751a4a48535fe0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ef15f0ab8279b32d2e1609929d7788ed6a633e47465f400f5ec4d1cb3e3fee5\"" Feb 13 15:31:50.144160 containerd[1468]: time="2025-02-13T15:31:50.144121676Z" level=info msg="StartContainer for \"8ef15f0ab8279b32d2e1609929d7788ed6a633e47465f400f5ec4d1cb3e3fee5\"" Feb 13 15:31:50.147552 containerd[1468]: time="2025-02-13T15:31:50.147520380Z" level=info msg="CreateContainer within sandbox \"d29d16f3280c8b9fdfc8d2b5e9ef8862f6990bb427c1d773ae6939ebac782c3a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74e261ba55e394582bade61894cac63de0320de98ec813787d6278a2e41010db\"" Feb 13 15:31:50.148003 containerd[1468]: time="2025-02-13T15:31:50.147888059Z" level=info msg="StartContainer for \"74e261ba55e394582bade61894cac63de0320de98ec813787d6278a2e41010db\"" Feb 13 15:31:50.171128 systemd[1]: Started cri-containerd-b1cf314c6b05efd1fcf41adcb2ec3536a403aa74fb81bcff8ac39e81c0e96ce8.scope - libcontainer container b1cf314c6b05efd1fcf41adcb2ec3536a403aa74fb81bcff8ac39e81c0e96ce8. Feb 13 15:31:50.174197 systemd[1]: Started cri-containerd-8ef15f0ab8279b32d2e1609929d7788ed6a633e47465f400f5ec4d1cb3e3fee5.scope - libcontainer container 8ef15f0ab8279b32d2e1609929d7788ed6a633e47465f400f5ec4d1cb3e3fee5. 
Feb 13 15:31:50.176038 kubelet[2296]: W0213 15:31:50.175987 2296 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:50.176124 kubelet[2296]: E0213 15:31:50.176048 2296 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 13 15:31:50.178096 systemd[1]: Started cri-containerd-74e261ba55e394582bade61894cac63de0320de98ec813787d6278a2e41010db.scope - libcontainer container 74e261ba55e394582bade61894cac63de0320de98ec813787d6278a2e41010db. Feb 13 15:31:50.217036 containerd[1468]: time="2025-02-13T15:31:50.216983789Z" level=info msg="StartContainer for \"b1cf314c6b05efd1fcf41adcb2ec3536a403aa74fb81bcff8ac39e81c0e96ce8\" returns successfully" Feb 13 15:31:50.221702 containerd[1468]: time="2025-02-13T15:31:50.221663455Z" level=info msg="StartContainer for \"8ef15f0ab8279b32d2e1609929d7788ed6a633e47465f400f5ec4d1cb3e3fee5\" returns successfully" Feb 13 15:31:50.225620 containerd[1468]: time="2025-02-13T15:31:50.225589387Z" level=info msg="StartContainer for \"74e261ba55e394582bade61894cac63de0320de98ec813787d6278a2e41010db\" returns successfully" Feb 13 15:31:50.640203 kubelet[2296]: E0213 15:31:50.640166 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:50.642915 kubelet[2296]: E0213 15:31:50.642515 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:50.687228 kubelet[2296]: E0213 
15:31:50.687189 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:51.626569 kubelet[2296]: E0213 15:31:51.626520 2296 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:31:51.647458 kubelet[2296]: E0213 15:31:51.647409 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:31:51.722725 kubelet[2296]: I0213 15:31:51.722674 2296 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:31:51.731952 kubelet[2296]: I0213 15:31:51.731923 2296 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:31:51.737417 kubelet[2296]: E0213 15:31:51.737372 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:51.837863 kubelet[2296]: E0213 15:31:51.837821 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:51.938260 kubelet[2296]: E0213 15:31:51.938140 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.038916 kubelet[2296]: E0213 15:31:52.038860 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.139925 kubelet[2296]: E0213 15:31:52.139878 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.240732 kubelet[2296]: E0213 15:31:52.240684 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.341272 kubelet[2296]: E0213 
15:31:52.341228 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.441834 kubelet[2296]: E0213 15:31:52.441784 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.542893 kubelet[2296]: E0213 15:31:52.542755 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.643593 kubelet[2296]: E0213 15:31:52.643542 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:52.744389 kubelet[2296]: E0213 15:31:52.744326 2296 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:53.539428 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-9.scope)... Feb 13 15:31:53.539446 systemd[1]: Reloading... Feb 13 15:31:53.607000 kubelet[2296]: I0213 15:31:53.606966 2296 apiserver.go:52] "Watching apiserver" Feb 13 15:31:53.613818 kubelet[2296]: I0213 15:31:53.613785 2296 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:31:53.625954 zram_generator::config[2624]: No configuration found. Feb 13 15:31:53.725349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:31:53.812640 systemd[1]: Reloading finished in 272 ms. Feb 13 15:31:53.853270 kubelet[2296]: I0213 15:31:53.853210 2296 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:31:53.853267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:53.874380 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 15:31:53.874683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:53.874734 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 117.7M memory peak, 0B memory swap peak. Feb 13 15:31:53.888130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:54.031555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:54.036389 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:31:54.082659 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:31:54.082659 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:31:54.082659 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:31:54.082659 kubelet[2663]: I0213 15:31:54.082624 2663 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:31:54.087099 kubelet[2663]: I0213 15:31:54.087070 2663 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:31:54.087099 kubelet[2663]: I0213 15:31:54.087090 2663 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:31:54.087236 kubelet[2663]: I0213 15:31:54.087221 2663 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:31:54.088317 kubelet[2663]: I0213 15:31:54.088293 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:31:54.089412 kubelet[2663]: I0213 15:31:54.089362 2663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:31:54.099068 kubelet[2663]: I0213 15:31:54.099030 2663 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:31:54.099302 kubelet[2663]: I0213 15:31:54.099265 2663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:31:54.099489 kubelet[2663]: I0213 15:31:54.099295 2663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:31:54.099585 kubelet[2663]: I0213 15:31:54.099499 2663 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:31:54.099585 kubelet[2663]: I0213 15:31:54.099510 2663 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:31:54.099585 kubelet[2663]: I0213 15:31:54.099555 2663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:54.099690 kubelet[2663]: I0213 15:31:54.099675 2663 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:31:54.099690 kubelet[2663]: I0213 15:31:54.099689 2663 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:31:54.099744 kubelet[2663]: I0213 15:31:54.099711 2663 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:31:54.099744 kubelet[2663]: I0213 15:31:54.099729 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:31:54.100550 kubelet[2663]: I0213 15:31:54.100269 2663 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:31:54.100550 kubelet[2663]: I0213 15:31:54.100439 2663 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:31:54.100851 kubelet[2663]: I0213 15:31:54.100828 2663 server.go:1264] "Started kubelet" Feb 13 15:31:54.103920 kubelet[2663]: I0213 15:31:54.101202 2663 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:31:54.103920 kubelet[2663]: I0213 15:31:54.102381 2663 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:31:54.103920 kubelet[2663]: I0213 15:31:54.102787 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:31:54.103920 kubelet[2663]: I0213 15:31:54.101201 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:31:54.103920 kubelet[2663]: I0213 15:31:54.103506 2663 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:31:54.109697 kubelet[2663]: E0213 15:31:54.109666 2663 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:31:54.109763 kubelet[2663]: I0213 15:31:54.109744 2663 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:31:54.109950 kubelet[2663]: I0213 15:31:54.109930 2663 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:31:54.110154 kubelet[2663]: I0213 15:31:54.110132 2663 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:31:54.111251 kubelet[2663]: E0213 15:31:54.111229 2663 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:31:54.111548 kubelet[2663]: I0213 15:31:54.111529 2663 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:31:54.111698 kubelet[2663]: I0213 15:31:54.111676 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:31:54.113051 kubelet[2663]: I0213 15:31:54.113033 2663 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:31:54.123527 kubelet[2663]: I0213 15:31:54.123494 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:31:54.125121 kubelet[2663]: I0213 15:31:54.125104 2663 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:31:54.125257 kubelet[2663]: I0213 15:31:54.125220 2663 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:31:54.125368 kubelet[2663]: I0213 15:31:54.125358 2663 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:31:54.125537 kubelet[2663]: E0213 15:31:54.125502 2663 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:31:54.150042 kubelet[2663]: I0213 15:31:54.150010 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:31:54.150186 kubelet[2663]: I0213 15:31:54.150174 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:31:54.150270 kubelet[2663]: I0213 15:31:54.150260 2663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:31:54.150466 kubelet[2663]: I0213 15:31:54.150452 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:31:54.150535 kubelet[2663]: I0213 15:31:54.150513 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:31:54.150582 kubelet[2663]: I0213 15:31:54.150573 2663 policy_none.go:49] "None policy: Start" Feb 13 15:31:54.151223 kubelet[2663]: I0213 15:31:54.151210 2663 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:31:54.151307 kubelet[2663]: I0213 15:31:54.151298 2663 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:31:54.151468 kubelet[2663]: I0213 15:31:54.151457 2663 state_mem.go:75] "Updated machine memory state" Feb 13 15:31:54.155366 kubelet[2663]: I0213 15:31:54.155329 2663 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:31:54.155570 kubelet[2663]: I0213 15:31:54.155524 2663 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:31:54.155731 kubelet[2663]: I0213 15:31:54.155637 2663 
plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:31:54.214175 kubelet[2663]: I0213 15:31:54.214137 2663 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:31:54.220818 kubelet[2663]: I0213 15:31:54.220774 2663 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 15:31:54.220963 kubelet[2663]: I0213 15:31:54.220855 2663 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:31:54.226636 kubelet[2663]: I0213 15:31:54.226585 2663 topology_manager.go:215] "Topology Admit Handler" podUID="0beba8fa7ac9416f2a8a701518aa25b0" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:31:54.226716 kubelet[2663]: I0213 15:31:54.226691 2663 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:31:54.227036 kubelet[2663]: I0213 15:31:54.227012 2663 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:31:54.310464 kubelet[2663]: I0213 15:31:54.310420 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:31:54.310464 kubelet[2663]: I0213 15:31:54.310465 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:31:54.310464 kubelet[2663]: I0213 15:31:54.310486 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:31:54.310640 kubelet[2663]: I0213 15:31:54.310502 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:31:54.310640 kubelet[2663]: I0213 15:31:54.310576 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:31:54.310640 kubelet[2663]: I0213 15:31:54.310612 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:31:54.310640 kubelet[2663]: I0213 15:31:54.310627 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0beba8fa7ac9416f2a8a701518aa25b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0beba8fa7ac9416f2a8a701518aa25b0\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:31:54.310731 kubelet[2663]: I0213 15:31:54.310644 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:31:54.310731 kubelet[2663]: I0213 15:31:54.310666 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:31:54.538485 kubelet[2663]: E0213 15:31:54.538417 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:54.539084 kubelet[2663]: E0213 15:31:54.538760 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:54.539084 kubelet[2663]: E0213 15:31:54.539012 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:55.100603 kubelet[2663]: I0213 15:31:55.099972 2663 apiserver.go:52] "Watching apiserver"
Feb 13 15:31:55.111004 kubelet[2663]: I0213 15:31:55.110972 2663 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:31:55.143671 kubelet[2663]: E0213 15:31:55.143629 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:55.144200 kubelet[2663]: E0213 15:31:55.144171 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:55.151928 kubelet[2663]: E0213 15:31:55.151523 2663 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:31:55.151928 kubelet[2663]: E0213 15:31:55.151884 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:55.206210 kubelet[2663]: I0213 15:31:55.206140 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.206111921 podStartE2EDuration="1.206111921s" podCreationTimestamp="2025-02-13 15:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:55.192128917 +0000 UTC m=+1.151165512" watchObservedRunningTime="2025-02-13 15:31:55.206111921 +0000 UTC m=+1.165148516"
Feb 13 15:31:55.214346 kubelet[2663]: I0213 15:31:55.214290 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.214273954 podStartE2EDuration="1.214273954s" podCreationTimestamp="2025-02-13 15:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:55.206609263 +0000 UTC m=+1.165645858" watchObservedRunningTime="2025-02-13 15:31:55.214273954 +0000 UTC m=+1.173310549"
Feb 13 15:31:56.145390 kubelet[2663]: E0213 15:31:56.145347 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:57.146850 kubelet[2663]: E0213 15:31:57.146802 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:58.254998 kubelet[2663]: E0213 15:31:58.254957 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:31:58.674110 sudo[1664]: pam_unix(sudo:session): session closed for user root
Feb 13 15:31:58.676105 sshd[1663]: Connection closed by 10.0.0.1 port 56862
Feb 13 15:31:58.676558 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Feb 13 15:31:58.682124 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:56862.service: Deactivated successfully.
Feb 13 15:31:58.684929 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:31:58.685171 systemd[1]: session-9.scope: Consumed 4.815s CPU time, 189.2M memory peak, 0B memory swap peak.
Feb 13 15:31:58.685768 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:31:58.686821 systemd-logind[1451]: Removed session 9.
Feb 13 15:32:03.694437 kubelet[2663]: E0213 15:32:03.694403 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:03.729857 kubelet[2663]: I0213 15:32:03.729799 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.72978378 podStartE2EDuration="9.72978378s" podCreationTimestamp="2025-02-13 15:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:31:55.214482544 +0000 UTC m=+1.173519139" watchObservedRunningTime="2025-02-13 15:32:03.72978378 +0000 UTC m=+9.688820375"
Feb 13 15:32:04.119378 kubelet[2663]: E0213 15:32:04.119338 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:04.156301 kubelet[2663]: E0213 15:32:04.156123 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:04.156301 kubelet[2663]: E0213 15:32:04.156238 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:04.588998 update_engine[1452]: I20250213 15:32:04.588941 1452 update_attempter.cc:509] Updating boot flags...
Feb 13 15:32:04.614819 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2760)
Feb 13 15:32:04.653925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2763)
Feb 13 15:32:04.687941 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2763)
Feb 13 15:32:08.260507 kubelet[2663]: E0213 15:32:08.260453 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:08.548073 kubelet[2663]: I0213 15:32:08.547965 2663 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:32:08.548385 containerd[1468]: time="2025-02-13T15:32:08.548338414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:32:08.548724 kubelet[2663]: I0213 15:32:08.548622 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:32:09.163331 kubelet[2663]: E0213 15:32:09.163298 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:09.415565 kubelet[2663]: I0213 15:32:09.415419 2663 topology_manager.go:215] "Topology Admit Handler" podUID="6359937d-5082-4e2f-b5c9-616e08220739" podNamespace="kube-system" podName="kube-proxy-hmxk2"
Feb 13 15:32:09.425242 systemd[1]: Created slice kubepods-besteffort-pod6359937d_5082_4e2f_b5c9_616e08220739.slice - libcontainer container kubepods-besteffort-pod6359937d_5082_4e2f_b5c9_616e08220739.slice.
Feb 13 15:32:09.502675 kubelet[2663]: I0213 15:32:09.502641 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6359937d-5082-4e2f-b5c9-616e08220739-kube-proxy\") pod \"kube-proxy-hmxk2\" (UID: \"6359937d-5082-4e2f-b5c9-616e08220739\") " pod="kube-system/kube-proxy-hmxk2"
Feb 13 15:32:09.502675 kubelet[2663]: I0213 15:32:09.502689 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6359937d-5082-4e2f-b5c9-616e08220739-xtables-lock\") pod \"kube-proxy-hmxk2\" (UID: \"6359937d-5082-4e2f-b5c9-616e08220739\") " pod="kube-system/kube-proxy-hmxk2"
Feb 13 15:32:09.502832 kubelet[2663]: I0213 15:32:09.502710 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6359937d-5082-4e2f-b5c9-616e08220739-lib-modules\") pod \"kube-proxy-hmxk2\" (UID: \"6359937d-5082-4e2f-b5c9-616e08220739\") " pod="kube-system/kube-proxy-hmxk2"
Feb 13 15:32:09.502832 kubelet[2663]: I0213 15:32:09.502732 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x997\" (UniqueName: \"kubernetes.io/projected/6359937d-5082-4e2f-b5c9-616e08220739-kube-api-access-4x997\") pod \"kube-proxy-hmxk2\" (UID: \"6359937d-5082-4e2f-b5c9-616e08220739\") " pod="kube-system/kube-proxy-hmxk2"
Feb 13 15:32:09.686101 kubelet[2663]: E0213 15:32:09.685976 2663 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:32:09.686101 kubelet[2663]: E0213 15:32:09.686022 2663 projected.go:200] Error preparing data for projected volume kube-api-access-4x997 for pod kube-system/kube-proxy-hmxk2: configmap "kube-root-ca.crt" not found
Feb 13 15:32:09.686101 kubelet[2663]: E0213 15:32:09.686095 2663 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6359937d-5082-4e2f-b5c9-616e08220739-kube-api-access-4x997 podName:6359937d-5082-4e2f-b5c9-616e08220739 nodeName:}" failed. No retries permitted until 2025-02-13 15:32:10.18607391 +0000 UTC m=+16.145110505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4x997" (UniqueName: "kubernetes.io/projected/6359937d-5082-4e2f-b5c9-616e08220739-kube-api-access-4x997") pod "kube-proxy-hmxk2" (UID: "6359937d-5082-4e2f-b5c9-616e08220739") : configmap "kube-root-ca.crt" not found
Feb 13 15:32:09.734030 kubelet[2663]: I0213 15:32:09.733974 2663 topology_manager.go:215] "Topology Admit Handler" podUID="d589e290-90a1-42b6-99da-8f6006da5988" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-b2zdj"
Feb 13 15:32:09.742226 systemd[1]: Created slice kubepods-besteffort-podd589e290_90a1_42b6_99da_8f6006da5988.slice - libcontainer container kubepods-besteffort-podd589e290_90a1_42b6_99da_8f6006da5988.slice.
Feb 13 15:32:09.805626 kubelet[2663]: I0213 15:32:09.805593 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d589e290-90a1-42b6-99da-8f6006da5988-var-lib-calico\") pod \"tigera-operator-7bc55997bb-b2zdj\" (UID: \"d589e290-90a1-42b6-99da-8f6006da5988\") " pod="tigera-operator/tigera-operator-7bc55997bb-b2zdj"
Feb 13 15:32:09.805626 kubelet[2663]: I0213 15:32:09.805626 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdxd\" (UniqueName: \"kubernetes.io/projected/d589e290-90a1-42b6-99da-8f6006da5988-kube-api-access-dxdxd\") pod \"tigera-operator-7bc55997bb-b2zdj\" (UID: \"d589e290-90a1-42b6-99da-8f6006da5988\") " pod="tigera-operator/tigera-operator-7bc55997bb-b2zdj"
Feb 13 15:32:10.045578 containerd[1468]: time="2025-02-13T15:32:10.045524348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-b2zdj,Uid:d589e290-90a1-42b6-99da-8f6006da5988,Namespace:tigera-operator,Attempt:0,}"
Feb 13 15:32:10.336207 kubelet[2663]: E0213 15:32:10.336095 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:10.336563 containerd[1468]: time="2025-02-13T15:32:10.336512892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmxk2,Uid:6359937d-5082-4e2f-b5c9-616e08220739,Namespace:kube-system,Attempt:0,}"
Feb 13 15:32:10.392881 containerd[1468]: time="2025-02-13T15:32:10.392088834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:32:10.392881 containerd[1468]: time="2025-02-13T15:32:10.392789728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:32:10.393067 containerd[1468]: time="2025-02-13T15:32:10.392819445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:32:10.393067 containerd[1468]: time="2025-02-13T15:32:10.392963477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:32:10.406797 containerd[1468]: time="2025-02-13T15:32:10.406700630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:32:10.407118 containerd[1468]: time="2025-02-13T15:32:10.407000998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:32:10.407118 containerd[1468]: time="2025-02-13T15:32:10.407053507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:32:10.407325 containerd[1468]: time="2025-02-13T15:32:10.407253325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:32:10.417145 systemd[1]: Started cri-containerd-982bcb56c3b3abfab11d83a4005097e77a8d4e2698b2a6af29c3c98414120aa8.scope - libcontainer container 982bcb56c3b3abfab11d83a4005097e77a8d4e2698b2a6af29c3c98414120aa8.
Feb 13 15:32:10.422307 systemd[1]: Started cri-containerd-9f62fd09b47e8ac3beb1c96039cf1ba6e4fe92c54fcd83d53edcdc55e72a344c.scope - libcontainer container 9f62fd09b47e8ac3beb1c96039cf1ba6e4fe92c54fcd83d53edcdc55e72a344c.
Feb 13 15:32:10.451835 containerd[1468]: time="2025-02-13T15:32:10.451627875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmxk2,Uid:6359937d-5082-4e2f-b5c9-616e08220739,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f62fd09b47e8ac3beb1c96039cf1ba6e4fe92c54fcd83d53edcdc55e72a344c\""
Feb 13 15:32:10.453082 kubelet[2663]: E0213 15:32:10.452392 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:10.454944 containerd[1468]: time="2025-02-13T15:32:10.454830468Z" level=info msg="CreateContainer within sandbox \"9f62fd09b47e8ac3beb1c96039cf1ba6e4fe92c54fcd83d53edcdc55e72a344c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:32:10.456354 containerd[1468]: time="2025-02-13T15:32:10.456331826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-b2zdj,Uid:d589e290-90a1-42b6-99da-8f6006da5988,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"982bcb56c3b3abfab11d83a4005097e77a8d4e2698b2a6af29c3c98414120aa8\""
Feb 13 15:32:10.460487 containerd[1468]: time="2025-02-13T15:32:10.460241906Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 15:32:10.477100 containerd[1468]: time="2025-02-13T15:32:10.477055755Z" level=info msg="CreateContainer within sandbox \"9f62fd09b47e8ac3beb1c96039cf1ba6e4fe92c54fcd83d53edcdc55e72a344c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c9b433f582f690aaaf493f8958b6e45a4648d42c7f22bfdc031258710acacf0\""
Feb 13 15:32:10.477705 containerd[1468]: time="2025-02-13T15:32:10.477636763Z" level=info msg="StartContainer for \"2c9b433f582f690aaaf493f8958b6e45a4648d42c7f22bfdc031258710acacf0\""
Feb 13 15:32:10.508049 systemd[1]: Started cri-containerd-2c9b433f582f690aaaf493f8958b6e45a4648d42c7f22bfdc031258710acacf0.scope - libcontainer container 2c9b433f582f690aaaf493f8958b6e45a4648d42c7f22bfdc031258710acacf0.
Feb 13 15:32:10.543218 containerd[1468]: time="2025-02-13T15:32:10.543170135Z" level=info msg="StartContainer for \"2c9b433f582f690aaaf493f8958b6e45a4648d42c7f22bfdc031258710acacf0\" returns successfully"
Feb 13 15:32:11.167894 kubelet[2663]: E0213 15:32:11.167852 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:11.176425 kubelet[2663]: I0213 15:32:11.176246 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmxk2" podStartSLOduration=2.176227365 podStartE2EDuration="2.176227365s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:11.17564806 +0000 UTC m=+17.134684656" watchObservedRunningTime="2025-02-13 15:32:11.176227365 +0000 UTC m=+17.135263960"
Feb 13 15:32:12.569337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467985379.mount: Deactivated successfully.
Feb 13 15:32:13.226716 containerd[1468]: time="2025-02-13T15:32:13.226659867Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:13.227383 containerd[1468]: time="2025-02-13T15:32:13.227315775Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Feb 13 15:32:13.228438 containerd[1468]: time="2025-02-13T15:32:13.228394000Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:13.230619 containerd[1468]: time="2025-02-13T15:32:13.230587851Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:13.231330 containerd[1468]: time="2025-02-13T15:32:13.231297730Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.771029285s"
Feb 13 15:32:13.231330 containerd[1468]: time="2025-02-13T15:32:13.231326285Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Feb 13 15:32:13.233268 containerd[1468]: time="2025-02-13T15:32:13.233243974Z" level=info msg="CreateContainer within sandbox \"982bcb56c3b3abfab11d83a4005097e77a8d4e2698b2a6af29c3c98414120aa8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 15:32:13.245749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197497524.mount: Deactivated successfully.
Feb 13 15:32:13.246913 containerd[1468]: time="2025-02-13T15:32:13.246861948Z" level=info msg="CreateContainer within sandbox \"982bcb56c3b3abfab11d83a4005097e77a8d4e2698b2a6af29c3c98414120aa8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6029ec5ec334d9f78766df730a7b742631e5bb6d5e2605737d9239366cc6e237\""
Feb 13 15:32:13.247328 containerd[1468]: time="2025-02-13T15:32:13.247261662Z" level=info msg="StartContainer for \"6029ec5ec334d9f78766df730a7b742631e5bb6d5e2605737d9239366cc6e237\""
Feb 13 15:32:13.274032 systemd[1]: Started cri-containerd-6029ec5ec334d9f78766df730a7b742631e5bb6d5e2605737d9239366cc6e237.scope - libcontainer container 6029ec5ec334d9f78766df730a7b742631e5bb6d5e2605737d9239366cc6e237.
Feb 13 15:32:13.299657 containerd[1468]: time="2025-02-13T15:32:13.299617101Z" level=info msg="StartContainer for \"6029ec5ec334d9f78766df730a7b742631e5bb6d5e2605737d9239366cc6e237\" returns successfully"
Feb 13 15:32:14.182854 kubelet[2663]: I0213 15:32:14.182760 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-b2zdj" podStartSLOduration=2.409438181 podStartE2EDuration="5.182744176s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="2025-02-13 15:32:10.458740148 +0000 UTC m=+16.417776743" lastFinishedPulling="2025-02-13 15:32:13.232046143 +0000 UTC m=+19.191082738" observedRunningTime="2025-02-13 15:32:14.182584404 +0000 UTC m=+20.141620999" watchObservedRunningTime="2025-02-13 15:32:14.182744176 +0000 UTC m=+20.141780771"
Feb 13 15:32:16.273549 kubelet[2663]: I0213 15:32:16.273502 2663 topology_manager.go:215] "Topology Admit Handler" podUID="d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d" podNamespace="calico-system" podName="calico-typha-5c95954f6b-89mvg"
Feb 13 15:32:16.289553 systemd[1]: Created slice kubepods-besteffort-podd1c0c6f1_0f71_43b5_aba9_ae27b1317f7d.slice - libcontainer container kubepods-besteffort-podd1c0c6f1_0f71_43b5_aba9_ae27b1317f7d.slice.
Feb 13 15:32:16.322676 kubelet[2663]: I0213 15:32:16.321204 2663 topology_manager.go:215] "Topology Admit Handler" podUID="a055e2cb-c619-4455-bad5-bfe0e9bf622c" podNamespace="calico-system" podName="calico-node-svldr"
Feb 13 15:32:16.333603 systemd[1]: Created slice kubepods-besteffort-poda055e2cb_c619_4455_bad5_bfe0e9bf622c.slice - libcontainer container kubepods-besteffort-poda055e2cb_c619_4455_bad5_bfe0e9bf622c.slice.
Feb 13 15:32:16.349069 kubelet[2663]: I0213 15:32:16.349013 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-xtables-lock\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349180 kubelet[2663]: I0213 15:32:16.349074 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d-tigera-ca-bundle\") pod \"calico-typha-5c95954f6b-89mvg\" (UID: \"d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d\") " pod="calico-system/calico-typha-5c95954f6b-89mvg"
Feb 13 15:32:16.349180 kubelet[2663]: I0213 15:32:16.349100 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a055e2cb-c619-4455-bad5-bfe0e9bf622c-tigera-ca-bundle\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349180 kubelet[2663]: I0213 15:32:16.349122 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx28n\" (UniqueName: \"kubernetes.io/projected/a055e2cb-c619-4455-bad5-bfe0e9bf622c-kube-api-access-dx28n\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349180 kubelet[2663]: I0213 15:32:16.349145 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-policysync\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349180 kubelet[2663]: I0213 15:32:16.349167 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-var-lib-calico\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349305 kubelet[2663]: I0213 15:32:16.349188 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-cni-bin-dir\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349305 kubelet[2663]: I0213 15:32:16.349212 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-flexvol-driver-host\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349305 kubelet[2663]: I0213 15:32:16.349233 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-lib-modules\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349305 kubelet[2663]: I0213 15:32:16.349253 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-cni-log-dir\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349305 kubelet[2663]: I0213 15:32:16.349275 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-var-run-calico\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349445 kubelet[2663]: I0213 15:32:16.349293 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a055e2cb-c619-4455-bad5-bfe0e9bf622c-cni-net-dir\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.349445 kubelet[2663]: I0213 15:32:16.349314 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhw8n\" (UniqueName: \"kubernetes.io/projected/d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d-kube-api-access-dhw8n\") pod \"calico-typha-5c95954f6b-89mvg\" (UID: \"d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d\") " pod="calico-system/calico-typha-5c95954f6b-89mvg"
Feb 13 15:32:16.349445 kubelet[2663]: I0213 15:32:16.349337 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d-typha-certs\") pod \"calico-typha-5c95954f6b-89mvg\" (UID: \"d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d\") " pod="calico-system/calico-typha-5c95954f6b-89mvg"
Feb 13 15:32:16.349445 kubelet[2663]: I0213 15:32:16.349377 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a055e2cb-c619-4455-bad5-bfe0e9bf622c-node-certs\") pod \"calico-node-svldr\" (UID: \"a055e2cb-c619-4455-bad5-bfe0e9bf622c\") " pod="calico-system/calico-node-svldr"
Feb 13 15:32:16.438375 kubelet[2663]: I0213 15:32:16.438322 2663 topology_manager.go:215] "Topology Admit Handler" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" podNamespace="calico-system" podName="csi-node-driver-xbpjc"
Feb 13 15:32:16.438609 kubelet[2663]: E0213 15:32:16.438574 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:16.449875 kubelet[2663]: I0213 15:32:16.449834 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/757af110-1c95-44e4-a60e-64cc5c9b9a1e-kubelet-dir\") pod \"csi-node-driver-xbpjc\" (UID: \"757af110-1c95-44e4-a60e-64cc5c9b9a1e\") " pod="calico-system/csi-node-driver-xbpjc"
Feb 13 15:32:16.451849 kubelet[2663]: I0213 15:32:16.451414 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/757af110-1c95-44e4-a60e-64cc5c9b9a1e-registration-dir\") pod \"csi-node-driver-xbpjc\" (UID: \"757af110-1c95-44e4-a60e-64cc5c9b9a1e\") " pod="calico-system/csi-node-driver-xbpjc"
Feb 13 15:32:16.451849 kubelet[2663]: I0213 15:32:16.451454 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhvkc\" (UniqueName: \"kubernetes.io/projected/757af110-1c95-44e4-a60e-64cc5c9b9a1e-kube-api-access-jhvkc\") pod \"csi-node-driver-xbpjc\" (UID: \"757af110-1c95-44e4-a60e-64cc5c9b9a1e\") " pod="calico-system/csi-node-driver-xbpjc"
Feb 13 15:32:16.451849 kubelet[2663]: I0213 15:32:16.451567 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/757af110-1c95-44e4-a60e-64cc5c9b9a1e-varrun\") pod \"csi-node-driver-xbpjc\" (UID: \"757af110-1c95-44e4-a60e-64cc5c9b9a1e\") " pod="calico-system/csi-node-driver-xbpjc"
Feb 13 15:32:16.451849 kubelet[2663]: I0213 15:32:16.451586 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/757af110-1c95-44e4-a60e-64cc5c9b9a1e-socket-dir\") pod \"csi-node-driver-xbpjc\" (UID: \"757af110-1c95-44e4-a60e-64cc5c9b9a1e\") " pod="calico-system/csi-node-driver-xbpjc"
Feb 13 15:32:16.460505 kubelet[2663]: E0213 15:32:16.460455 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:32:16.460505 kubelet[2663]: W0213 15:32:16.460483 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:32:16.460623 kubelet[2663]: E0213 15:32:16.460522 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:32:16.460776 kubelet[2663]: E0213 15:32:16.460754 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:32:16.460776 kubelet[2663]: W0213 15:32:16.460770 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:32:16.460845 kubelet[2663]: E0213 15:32:16.460783 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:32:16.461006 kubelet[2663]: E0213 15:32:16.460985 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:32:16.461046 kubelet[2663]: W0213 15:32:16.461022 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:32:16.461046 kubelet[2663]: E0213 15:32:16.461039 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 15:32:16.461576 kubelet[2663]: E0213 15:32:16.461316 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.461576 kubelet[2663]: W0213 15:32:16.461354 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.461576 kubelet[2663]: E0213 15:32:16.461365 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.461664 kubelet[2663]: E0213 15:32:16.461595 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.461664 kubelet[2663]: W0213 15:32:16.461604 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.461664 kubelet[2663]: E0213 15:32:16.461614 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.461878 kubelet[2663]: E0213 15:32:16.461854 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.461931 kubelet[2663]: W0213 15:32:16.461889 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.461959 kubelet[2663]: E0213 15:32:16.461934 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.462866 kubelet[2663]: E0213 15:32:16.462842 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.462866 kubelet[2663]: W0213 15:32:16.462858 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.462866 kubelet[2663]: E0213 15:32:16.462868 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.465656 kubelet[2663]: E0213 15:32:16.465629 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.465656 kubelet[2663]: W0213 15:32:16.465646 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.465656 kubelet[2663]: E0213 15:32:16.465657 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.466372 kubelet[2663]: E0213 15:32:16.466149 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.466372 kubelet[2663]: W0213 15:32:16.466163 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.466372 kubelet[2663]: E0213 15:32:16.466177 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.467122 kubelet[2663]: E0213 15:32:16.467099 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.467122 kubelet[2663]: W0213 15:32:16.467115 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.467203 kubelet[2663]: E0213 15:32:16.467179 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.467402 kubelet[2663]: E0213 15:32:16.467369 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.467402 kubelet[2663]: W0213 15:32:16.467401 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.467459 kubelet[2663]: E0213 15:32:16.467447 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.467705 kubelet[2663]: E0213 15:32:16.467683 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.467705 kubelet[2663]: W0213 15:32:16.467698 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.467772 kubelet[2663]: E0213 15:32:16.467761 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.468016 kubelet[2663]: E0213 15:32:16.467995 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.468016 kubelet[2663]: W0213 15:32:16.468010 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.468077 kubelet[2663]: E0213 15:32:16.468061 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.468534 kubelet[2663]: E0213 15:32:16.468222 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.468534 kubelet[2663]: W0213 15:32:16.468247 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.468534 kubelet[2663]: E0213 15:32:16.468297 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.468534 kubelet[2663]: E0213 15:32:16.468497 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.468534 kubelet[2663]: W0213 15:32:16.468505 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.468667 kubelet[2663]: E0213 15:32:16.468556 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.468819 kubelet[2663]: E0213 15:32:16.468798 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.468819 kubelet[2663]: W0213 15:32:16.468813 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.468879 kubelet[2663]: E0213 15:32:16.468868 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.469089 kubelet[2663]: E0213 15:32:16.469067 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.469089 kubelet[2663]: W0213 15:32:16.469082 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.469148 kubelet[2663]: E0213 15:32:16.469110 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.470158 kubelet[2663]: E0213 15:32:16.470039 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.470158 kubelet[2663]: W0213 15:32:16.470053 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.470457 kubelet[2663]: E0213 15:32:16.470434 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.470707 kubelet[2663]: E0213 15:32:16.470685 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.470707 kubelet[2663]: W0213 15:32:16.470697 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.470779 kubelet[2663]: E0213 15:32:16.470761 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.470976 kubelet[2663]: E0213 15:32:16.470961 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.470976 kubelet[2663]: W0213 15:32:16.470973 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.471038 kubelet[2663]: E0213 15:32:16.471005 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.471203 kubelet[2663]: E0213 15:32:16.471182 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.471203 kubelet[2663]: W0213 15:32:16.471193 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.471248 kubelet[2663]: E0213 15:32:16.471221 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.471413 kubelet[2663]: E0213 15:32:16.471399 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.471413 kubelet[2663]: W0213 15:32:16.471409 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.471483 kubelet[2663]: E0213 15:32:16.471451 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.471945 kubelet[2663]: E0213 15:32:16.471791 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.471945 kubelet[2663]: W0213 15:32:16.471886 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.472067 kubelet[2663]: E0213 15:32:16.472044 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.472634 kubelet[2663]: E0213 15:32:16.472587 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.472634 kubelet[2663]: W0213 15:32:16.472600 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.472634 kubelet[2663]: E0213 15:32:16.472613 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.473282 kubelet[2663]: E0213 15:32:16.472956 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.473282 kubelet[2663]: W0213 15:32:16.472983 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.473969 kubelet[2663]: E0213 15:32:16.473944 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.475503 kubelet[2663]: E0213 15:32:16.475359 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.475503 kubelet[2663]: W0213 15:32:16.475373 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.475503 kubelet[2663]: E0213 15:32:16.475392 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.475940 kubelet[2663]: E0213 15:32:16.475923 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.475940 kubelet[2663]: W0213 15:32:16.475935 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.475995 kubelet[2663]: E0213 15:32:16.475950 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.476145 kubelet[2663]: E0213 15:32:16.476130 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.476145 kubelet[2663]: W0213 15:32:16.476144 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.476200 kubelet[2663]: E0213 15:32:16.476158 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.476411 kubelet[2663]: E0213 15:32:16.476395 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.476411 kubelet[2663]: W0213 15:32:16.476407 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.476479 kubelet[2663]: E0213 15:32:16.476422 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.476963 kubelet[2663]: E0213 15:32:16.476703 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.476963 kubelet[2663]: W0213 15:32:16.476717 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.476963 kubelet[2663]: E0213 15:32:16.476727 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.476963 kubelet[2663]: E0213 15:32:16.476887 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.476963 kubelet[2663]: W0213 15:32:16.476894 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.476963 kubelet[2663]: E0213 15:32:16.476915 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.477160 kubelet[2663]: E0213 15:32:16.477125 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.477160 kubelet[2663]: W0213 15:32:16.477140 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.477160 kubelet[2663]: E0213 15:32:16.477149 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.552925 kubelet[2663]: E0213 15:32:16.552809 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.552925 kubelet[2663]: W0213 15:32:16.552828 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.552925 kubelet[2663]: E0213 15:32:16.552848 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.553171 kubelet[2663]: E0213 15:32:16.553078 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.553171 kubelet[2663]: W0213 15:32:16.553086 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.553171 kubelet[2663]: E0213 15:32:16.553097 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.554042 kubelet[2663]: E0213 15:32:16.554027 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.554042 kubelet[2663]: W0213 15:32:16.554038 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.554111 kubelet[2663]: E0213 15:32:16.554053 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.554275 kubelet[2663]: E0213 15:32:16.554264 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.554275 kubelet[2663]: W0213 15:32:16.554273 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.554374 kubelet[2663]: E0213 15:32:16.554285 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.554677 kubelet[2663]: E0213 15:32:16.554662 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.554677 kubelet[2663]: W0213 15:32:16.554673 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.554751 kubelet[2663]: E0213 15:32:16.554740 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.554891 kubelet[2663]: E0213 15:32:16.554879 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.554891 kubelet[2663]: W0213 15:32:16.554889 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.555022 kubelet[2663]: E0213 15:32:16.555004 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.555082 kubelet[2663]: E0213 15:32:16.555069 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.555082 kubelet[2663]: W0213 15:32:16.555080 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.555153 kubelet[2663]: E0213 15:32:16.555131 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.555444 kubelet[2663]: E0213 15:32:16.555414 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.555592 kubelet[2663]: W0213 15:32:16.555488 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.555592 kubelet[2663]: E0213 15:32:16.555515 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.555927 kubelet[2663]: E0213 15:32:16.555919 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.555966 kubelet[2663]: W0213 15:32:16.555927 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.555966 kubelet[2663]: E0213 15:32:16.555948 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.556159 kubelet[2663]: E0213 15:32:16.556131 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.556159 kubelet[2663]: W0213 15:32:16.556141 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.556358 kubelet[2663]: E0213 15:32:16.556195 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.556445 kubelet[2663]: E0213 15:32:16.556432 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.556495 kubelet[2663]: W0213 15:32:16.556466 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.556573 kubelet[2663]: E0213 15:32:16.556547 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.556700 kubelet[2663]: E0213 15:32:16.556683 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.556700 kubelet[2663]: W0213 15:32:16.556695 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.557039 kubelet[2663]: E0213 15:32:16.556731 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.557039 kubelet[2663]: E0213 15:32:16.556919 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.557039 kubelet[2663]: W0213 15:32:16.556927 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.557039 kubelet[2663]: E0213 15:32:16.556983 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.557206 kubelet[2663]: E0213 15:32:16.557123 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.557206 kubelet[2663]: W0213 15:32:16.557129 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.557206 kubelet[2663]: E0213 15:32:16.557151 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.557374 kubelet[2663]: E0213 15:32:16.557361 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.557374 kubelet[2663]: W0213 15:32:16.557371 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.557466 kubelet[2663]: E0213 15:32:16.557392 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.557669 kubelet[2663]: E0213 15:32:16.557637 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.557669 kubelet[2663]: W0213 15:32:16.557668 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.557724 kubelet[2663]: E0213 15:32:16.557685 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.557947 kubelet[2663]: E0213 15:32:16.557934 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.557947 kubelet[2663]: W0213 15:32:16.557944 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.558010 kubelet[2663]: E0213 15:32:16.557957 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.558179 kubelet[2663]: E0213 15:32:16.558165 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.558204 kubelet[2663]: W0213 15:32:16.558178 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.558238 kubelet[2663]: E0213 15:32:16.558218 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.558400 kubelet[2663]: E0213 15:32:16.558379 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.558400 kubelet[2663]: W0213 15:32:16.558397 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.558507 kubelet[2663]: E0213 15:32:16.558451 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.558607 kubelet[2663]: E0213 15:32:16.558597 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.558607 kubelet[2663]: W0213 15:32:16.558606 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.558704 kubelet[2663]: E0213 15:32:16.558658 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.558781 kubelet[2663]: E0213 15:32:16.558770 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.558806 kubelet[2663]: W0213 15:32:16.558779 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.558918 kubelet[2663]: E0213 15:32:16.558841 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.559011 kubelet[2663]: E0213 15:32:16.558977 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.559049 kubelet[2663]: W0213 15:32:16.559014 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.559049 kubelet[2663]: E0213 15:32:16.559028 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.559320 kubelet[2663]: E0213 15:32:16.559306 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.559320 kubelet[2663]: W0213 15:32:16.559317 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.559401 kubelet[2663]: E0213 15:32:16.559332 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.559559 kubelet[2663]: E0213 15:32:16.559547 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.559559 kubelet[2663]: W0213 15:32:16.559556 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.559613 kubelet[2663]: E0213 15:32:16.559569 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.559854 kubelet[2663]: E0213 15:32:16.559833 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.559854 kubelet[2663]: W0213 15:32:16.559845 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.559936 kubelet[2663]: E0213 15:32:16.559857 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:16.566667 kubelet[2663]: E0213 15:32:16.566608 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:16.566667 kubelet[2663]: W0213 15:32:16.566624 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:16.566667 kubelet[2663]: E0213 15:32:16.566639 2663 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:16.596147 kubelet[2663]: E0213 15:32:16.596116 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:16.596632 containerd[1468]: time="2025-02-13T15:32:16.596601774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c95954f6b-89mvg,Uid:d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:16.637310 kubelet[2663]: E0213 15:32:16.637283 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:16.637939 containerd[1468]: time="2025-02-13T15:32:16.637586444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-svldr,Uid:a055e2cb-c619-4455-bad5-bfe0e9bf622c,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:16.809559 containerd[1468]: time="2025-02-13T15:32:16.809157944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:16.809559 containerd[1468]: time="2025-02-13T15:32:16.809270566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:16.809559 containerd[1468]: time="2025-02-13T15:32:16.809287197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:16.809559 containerd[1468]: time="2025-02-13T15:32:16.809365304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:16.812095 containerd[1468]: time="2025-02-13T15:32:16.811950188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:16.812095 containerd[1468]: time="2025-02-13T15:32:16.812005492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:16.812095 containerd[1468]: time="2025-02-13T15:32:16.812028606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:16.812315 containerd[1468]: time="2025-02-13T15:32:16.812264971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:16.836125 systemd[1]: Started cri-containerd-4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca.scope - libcontainer container 4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca. Feb 13 15:32:16.839403 systemd[1]: Started cri-containerd-cfce385a0d45be6ced4449987bfd94be221feb5367a31c5509f51ec7e337023e.scope - libcontainer container cfce385a0d45be6ced4449987bfd94be221feb5367a31c5509f51ec7e337023e. 
Feb 13 15:32:16.861935 containerd[1468]: time="2025-02-13T15:32:16.861630812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-svldr,Uid:a055e2cb-c619-4455-bad5-bfe0e9bf622c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\""
Feb 13 15:32:16.862272 kubelet[2663]: E0213 15:32:16.862199 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:16.863136 containerd[1468]: time="2025-02-13T15:32:16.863116293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:32:16.878502 containerd[1468]: time="2025-02-13T15:32:16.878458679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c95954f6b-89mvg,Uid:d1c0c6f1-0f71-43b5-aba9-ae27b1317f7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfce385a0d45be6ced4449987bfd94be221feb5367a31c5509f51ec7e337023e\""
Feb 13 15:32:16.879956 kubelet[2663]: E0213 15:32:16.879935 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:18.125838 kubelet[2663]: E0213 15:32:18.125786 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:18.470180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145648266.mount: Deactivated successfully.
Feb 13 15:32:18.544089 containerd[1468]: time="2025-02-13T15:32:18.544027992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:18.544691 containerd[1468]: time="2025-02-13T15:32:18.544637039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 15:32:18.545725 containerd[1468]: time="2025-02-13T15:32:18.545682338Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:18.547646 containerd[1468]: time="2025-02-13T15:32:18.547608468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:18.548163 containerd[1468]: time="2025-02-13T15:32:18.548126072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.68498366s"
Feb 13 15:32:18.548198 containerd[1468]: time="2025-02-13T15:32:18.548161961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:32:18.549021 containerd[1468]: time="2025-02-13T15:32:18.548991433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 15:32:18.550473 containerd[1468]: time="2025-02-13T15:32:18.550434773Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:32:18.582481 containerd[1468]: time="2025-02-13T15:32:18.582444683Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7\""
Feb 13 15:32:18.582991 containerd[1468]: time="2025-02-13T15:32:18.582934326Z" level=info msg="StartContainer for \"31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7\""
Feb 13 15:32:18.614079 systemd[1]: Started cri-containerd-31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7.scope - libcontainer container 31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7.
Feb 13 15:32:18.645696 containerd[1468]: time="2025-02-13T15:32:18.645638148Z" level=info msg="StartContainer for \"31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7\" returns successfully"
Feb 13 15:32:18.657154 systemd[1]: cri-containerd-31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7.scope: Deactivated successfully.
Feb 13 15:32:18.968776 containerd[1468]: time="2025-02-13T15:32:18.968706799Z" level=info msg="shim disconnected" id=31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7 namespace=k8s.io
Feb 13 15:32:18.968776 containerd[1468]: time="2025-02-13T15:32:18.968775219Z" level=warning msg="cleaning up after shim disconnected" id=31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7 namespace=k8s.io
Feb 13 15:32:18.968776 containerd[1468]: time="2025-02-13T15:32:18.968784336Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:32:19.184270 kubelet[2663]: E0213 15:32:19.184222 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:19.470618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31e8298a8182366787c6841d99d40017d752576d722ab1520c2581ac66ff79a7-rootfs.mount: Deactivated successfully.
Feb 13 15:32:20.126264 kubelet[2663]: E0213 15:32:20.126224 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:20.839537 containerd[1468]: time="2025-02-13T15:32:20.839489440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:20.840195 containerd[1468]: time="2025-02-13T15:32:20.840141619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Feb 13 15:32:20.841196 containerd[1468]: time="2025-02-13T15:32:20.841164224Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:20.843352 containerd[1468]: time="2025-02-13T15:32:20.843302892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:20.843943 containerd[1468]: time="2025-02-13T15:32:20.843888865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.294865362s"
Feb 13 15:32:20.843943 containerd[1468]: time="2025-02-13T15:32:20.843928740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 15:32:20.845090 containerd[1468]: time="2025-02-13T15:32:20.844965161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:32:20.853042 containerd[1468]: time="2025-02-13T15:32:20.852965266Z" level=info msg="CreateContainer within sandbox \"cfce385a0d45be6ced4449987bfd94be221feb5367a31c5509f51ec7e337023e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 15:32:20.867683 containerd[1468]: time="2025-02-13T15:32:20.867636232Z" level=info msg="CreateContainer within sandbox \"cfce385a0d45be6ced4449987bfd94be221feb5367a31c5509f51ec7e337023e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"260edf5115f2189d2b91653b4a486d600cfc1172d109c0fd572bb2b1466c631b\""
Feb 13 15:32:20.868308 containerd[1468]: time="2025-02-13T15:32:20.868093384Z" level=info msg="StartContainer for \"260edf5115f2189d2b91653b4a486d600cfc1172d109c0fd572bb2b1466c631b\""
Feb 13 15:32:20.906208 systemd[1]: Started cri-containerd-260edf5115f2189d2b91653b4a486d600cfc1172d109c0fd572bb2b1466c631b.scope - libcontainer container 260edf5115f2189d2b91653b4a486d600cfc1172d109c0fd572bb2b1466c631b.
Feb 13 15:32:21.024633 containerd[1468]: time="2025-02-13T15:32:21.024570268Z" level=info msg="StartContainer for \"260edf5115f2189d2b91653b4a486d600cfc1172d109c0fd572bb2b1466c631b\" returns successfully"
Feb 13 15:32:21.189867 kubelet[2663]: E0213 15:32:21.189741 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:21.319028 kubelet[2663]: I0213 15:32:21.318832 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c95954f6b-89mvg" podStartSLOduration=1.354849414 podStartE2EDuration="5.318812553s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:16.880826492 +0000 UTC m=+22.839863087" lastFinishedPulling="2025-02-13 15:32:20.844789641 +0000 UTC m=+26.803826226" observedRunningTime="2025-02-13 15:32:21.318796122 +0000 UTC m=+27.277832717" watchObservedRunningTime="2025-02-13 15:32:21.318812553 +0000 UTC m=+27.277849148"
Feb 13 15:32:22.126605 kubelet[2663]: E0213 15:32:22.126544 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:22.190927 kubelet[2663]: I0213 15:32:22.190867 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:32:22.191465 kubelet[2663]: E0213 15:32:22.191443 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:23.813255 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:59128.service - OpenSSH per-connection server daemon (10.0.0.1:59128).
Feb 13 15:32:23.856511 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 59128 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE
Feb 13 15:32:23.858239 sshd-session[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:32:23.862720 systemd-logind[1451]: New session 10 of user core.
Feb 13 15:32:23.879017 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:32:23.996102 sshd[3333]: Connection closed by 10.0.0.1 port 59128
Feb 13 15:32:23.996528 sshd-session[3331]: pam_unix(sshd:session): session closed for user core
Feb 13 15:32:24.000413 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:59128.service: Deactivated successfully.
Feb 13 15:32:24.002130 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:32:24.002676 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:32:24.003545 systemd-logind[1451]: Removed session 10.
Feb 13 15:32:24.126785 kubelet[2663]: E0213 15:32:24.126628 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:26.109348 containerd[1468]: time="2025-02-13T15:32:26.109293229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:26.110048 containerd[1468]: time="2025-02-13T15:32:26.109969741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:32:26.111126 containerd[1468]: time="2025-02-13T15:32:26.111078226Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:26.113460 containerd[1468]: time="2025-02-13T15:32:26.113416653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:32:26.113997 containerd[1468]: time="2025-02-13T15:32:26.113963111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.268968924s"
Feb 13 15:32:26.113997 containerd[1468]: time="2025-02-13T15:32:26.113993839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:32:26.116624 containerd[1468]: time="2025-02-13T15:32:26.116599579Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:32:26.126516 kubelet[2663]: E0213 15:32:26.126445 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e"
Feb 13 15:32:26.134647 containerd[1468]: time="2025-02-13T15:32:26.134610070Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77\""
Feb 13 15:32:26.135006 containerd[1468]: time="2025-02-13T15:32:26.134962482Z" level=info msg="StartContainer for \"1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77\""
Feb 13 15:32:26.164126 systemd[1]: Started cri-containerd-1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77.scope - libcontainer container 1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77.
Feb 13 15:32:26.195211 containerd[1468]: time="2025-02-13T15:32:26.195117740Z" level=info msg="StartContainer for \"1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77\" returns successfully"
Feb 13 15:32:26.205263 kubelet[2663]: E0213 15:32:26.205228 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:27.206374 kubelet[2663]: E0213 15:32:27.206321 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:27.446468 containerd[1468]: time="2025-02-13T15:32:27.446410953Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:32:27.449417 systemd[1]: cri-containerd-1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77.scope: Deactivated successfully.
Feb 13 15:32:27.470643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77-rootfs.mount: Deactivated successfully.
Feb 13 15:32:27.489233 kubelet[2663]: I0213 15:32:27.489152 2663 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:32:27.562351 kubelet[2663]: I0213 15:32:27.562306 2663 topology_manager.go:215] "Topology Admit Handler" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" podNamespace="calico-apiserver" podName="calico-apiserver-5bd68f4995-w55w7"
Feb 13 15:32:27.562584 kubelet[2663]: I0213 15:32:27.562556 2663 topology_manager.go:215] "Topology Admit Handler" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5jfrp"
Feb 13 15:32:27.562733 kubelet[2663]: I0213 15:32:27.562677 2663 topology_manager.go:215] "Topology Admit Handler" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" podNamespace="calico-apiserver" podName="calico-apiserver-5bd68f4995-bdz7b"
Feb 13 15:32:27.563413 kubelet[2663]: I0213 15:32:27.563392 2663 topology_manager.go:215] "Topology Admit Handler" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" podNamespace="calico-system" podName="calico-kube-controllers-7466fbdd8f-crh5j"
Feb 13 15:32:27.563510 kubelet[2663]: I0213 15:32:27.563496 2663 topology_manager.go:215] "Topology Admit Handler" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dczc4"
Feb 13 15:32:27.565421 containerd[1468]: time="2025-02-13T15:32:27.565159531Z" level=info msg="shim disconnected" id=1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77 namespace=k8s.io
Feb 13 15:32:27.565421 containerd[1468]: time="2025-02-13T15:32:27.565230546Z" level=warning msg="cleaning up after shim disconnected" id=1eded18656dce5cae035c1169722cb521fd5b6179f28c5a20caf67a1f6652e77 namespace=k8s.io
Feb 13 15:32:27.565421 containerd[1468]: time="2025-02-13T15:32:27.565241847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:32:27.574511 systemd[1]: Created slice kubepods-burstable-pod1d0e938c_1376_4e25_a332_b48365cd1ce4.slice - libcontainer container kubepods-burstable-pod1d0e938c_1376_4e25_a332_b48365cd1ce4.slice.
Feb 13 15:32:27.581667 systemd[1]: Created slice kubepods-besteffort-podbde67ccf_8db2_4a00_9ea9_11acd382b495.slice - libcontainer container kubepods-besteffort-podbde67ccf_8db2_4a00_9ea9_11acd382b495.slice.
Feb 13 15:32:27.591772 systemd[1]: Created slice kubepods-burstable-podf93cafd2_d36c_4948_9ac9_90d542cbe206.slice - libcontainer container kubepods-burstable-podf93cafd2_d36c_4948_9ac9_90d542cbe206.slice.
Feb 13 15:32:27.599690 systemd[1]: Created slice kubepods-besteffort-pod69e7b0a9_0728_4031_b89b_c2766dc8da1b.slice - libcontainer container kubepods-besteffort-pod69e7b0a9_0728_4031_b89b_c2766dc8da1b.slice.
Feb 13 15:32:27.604893 systemd[1]: Created slice kubepods-besteffort-pod6a4955fe_2fb1_4e1b_a558_1a75615b1f9d.slice - libcontainer container kubepods-besteffort-pod6a4955fe_2fb1_4e1b_a558_1a75615b1f9d.slice.
Feb 13 15:32:27.728626 kubelet[2663]: I0213 15:32:27.728490 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2v9\" (UniqueName: \"kubernetes.io/projected/6a4955fe-2fb1-4e1b-a558-1a75615b1f9d-kube-api-access-nm2v9\") pod \"calico-kube-controllers-7466fbdd8f-crh5j\" (UID: \"6a4955fe-2fb1-4e1b-a558-1a75615b1f9d\") " pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j"
Feb 13 15:32:27.728626 kubelet[2663]: I0213 15:32:27.728542 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f93cafd2-d36c-4948-9ac9-90d542cbe206-config-volume\") pod \"coredns-7db6d8ff4d-dczc4\" (UID: \"f93cafd2-d36c-4948-9ac9-90d542cbe206\") " pod="kube-system/coredns-7db6d8ff4d-dczc4"
Feb 13 15:32:27.728626 kubelet[2663]: I0213 15:32:27.728568 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bde67ccf-8db2-4a00-9ea9-11acd382b495-calico-apiserver-certs\") pod \"calico-apiserver-5bd68f4995-w55w7\" (UID: \"bde67ccf-8db2-4a00-9ea9-11acd382b495\") " pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7"
Feb 13 15:32:27.728626 kubelet[2663]: I0213 15:32:27.728589 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppqx\" (UniqueName: \"kubernetes.io/projected/69e7b0a9-0728-4031-b89b-c2766dc8da1b-kube-api-access-2ppqx\") pod \"calico-apiserver-5bd68f4995-bdz7b\" (UID: \"69e7b0a9-0728-4031-b89b-c2766dc8da1b\") " pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b"
Feb 13 15:32:27.728626 kubelet[2663]: I0213 15:32:27.728611 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69e7b0a9-0728-4031-b89b-c2766dc8da1b-calico-apiserver-certs\") pod \"calico-apiserver-5bd68f4995-bdz7b\" (UID: \"69e7b0a9-0728-4031-b89b-c2766dc8da1b\") " pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b"
Feb 13 15:32:27.728852 kubelet[2663]: I0213 15:32:27.728634 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4955fe-2fb1-4e1b-a558-1a75615b1f9d-tigera-ca-bundle\") pod \"calico-kube-controllers-7466fbdd8f-crh5j\" (UID: \"6a4955fe-2fb1-4e1b-a558-1a75615b1f9d\") " pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j"
Feb 13 15:32:27.728852 kubelet[2663]: I0213 15:32:27.728658 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0e938c-1376-4e25-a332-b48365cd1ce4-config-volume\") pod \"coredns-7db6d8ff4d-5jfrp\" (UID: \"1d0e938c-1376-4e25-a332-b48365cd1ce4\") " pod="kube-system/coredns-7db6d8ff4d-5jfrp"
Feb 13 15:32:27.728852 kubelet[2663]: I0213 15:32:27.728681 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294vw\" (UniqueName: \"kubernetes.io/projected/bde67ccf-8db2-4a00-9ea9-11acd382b495-kube-api-access-294vw\") pod \"calico-apiserver-5bd68f4995-w55w7\" (UID: \"bde67ccf-8db2-4a00-9ea9-11acd382b495\") " pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7"
Feb 13 15:32:27.728852 kubelet[2663]: I0213 15:32:27.728698 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8cn\" (UniqueName: \"kubernetes.io/projected/f93cafd2-d36c-4948-9ac9-90d542cbe206-kube-api-access-6g8cn\") pod \"coredns-7db6d8ff4d-dczc4\" (UID: \"f93cafd2-d36c-4948-9ac9-90d542cbe206\") " pod="kube-system/coredns-7db6d8ff4d-dczc4"
Feb 13 15:32:27.728852 kubelet[2663]: I0213 15:32:27.728718 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6796\" (UniqueName: \"kubernetes.io/projected/1d0e938c-1376-4e25-a332-b48365cd1ce4-kube-api-access-z6796\") pod \"coredns-7db6d8ff4d-5jfrp\" (UID: \"1d0e938c-1376-4e25-a332-b48365cd1ce4\") " pod="kube-system/coredns-7db6d8ff4d-5jfrp"
Feb 13 15:32:27.878672 kubelet[2663]: E0213 15:32:27.878639 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:27.879297 containerd[1468]: time="2025-02-13T15:32:27.879258450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:0,}"
Feb 13 15:32:27.887375 containerd[1468]: time="2025-02-13T15:32:27.887346398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:32:27.895802 kubelet[2663]: E0213 15:32:27.895764 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:32:27.896354 containerd[1468]: time="2025-02-13T15:32:27.896281939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:0,}"
Feb 13 15:32:27.903783 containerd[1468]: time="2025-02-13T15:32:27.903610859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 15:32:27.910511 containerd[1468]: time="2025-02-13T15:32:27.910176254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:0,}"
Feb 13 15:32:27.994244 containerd[1468]: time="2025-02-13T15:32:27.994087138Z" level=error msg="Failed to destroy network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:32:27.994713 containerd[1468]: time="2025-02-13T15:32:27.994512227Z" level=error msg="encountered an error cleaning up failed sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:32:27.994713 containerd[1468]: time="2025-02-13T15:32:27.994593621Z" level=error msg="RunPodSandbox for
&PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.013009 containerd[1468]: time="2025-02-13T15:32:28.012937878Z" level=error msg="Failed to destroy network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.013413 containerd[1468]: time="2025-02-13T15:32:28.013388035Z" level=error msg="encountered an error cleaning up failed sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.013475 containerd[1468]: time="2025-02-13T15:32:28.013453799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.016775 kubelet[2663]: E0213 15:32:28.015577 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.016775 kubelet[2663]: E0213 15:32:28.015655 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:28.016775 kubelet[2663]: E0213 15:32:28.015676 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:28.016775 kubelet[2663]: E0213 15:32:28.015598 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.017054 kubelet[2663]: E0213 15:32:28.015722 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:28.017054 kubelet[2663]: E0213 15:32:28.015743 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:28.017054 kubelet[2663]: E0213 15:32:28.015761 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:28.017201 kubelet[2663]: E0213 15:32:28.015823 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:28.026504 containerd[1468]: time="2025-02-13T15:32:28.026453116Z" level=error msg="Failed to destroy network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.026896 containerd[1468]: time="2025-02-13T15:32:28.026868828Z" level=error msg="encountered an error cleaning up failed sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.026980 containerd[1468]: time="2025-02-13T15:32:28.026956062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.027350 kubelet[2663]: E0213 15:32:28.027154 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.027350 kubelet[2663]: E0213 15:32:28.027214 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:28.027350 kubelet[2663]: E0213 15:32:28.027248 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:28.027443 kubelet[2663]: E0213 15:32:28.027291 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" 
podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:28.030693 containerd[1468]: time="2025-02-13T15:32:28.030656850Z" level=error msg="Failed to destroy network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031159 containerd[1468]: time="2025-02-13T15:32:28.031126753Z" level=error msg="encountered an error cleaning up failed sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031359 containerd[1468]: time="2025-02-13T15:32:28.031335856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031595 kubelet[2663]: E0213 15:32:28.031528 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031595 kubelet[2663]: E0213 15:32:28.031563 2663 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:28.031595 kubelet[2663]: E0213 15:32:28.031579 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:28.031824 kubelet[2663]: E0213 15:32:28.031614 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:28.031882 containerd[1468]: time="2025-02-13T15:32:28.031609430Z" level=error msg="Failed to destroy network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031961 containerd[1468]: time="2025-02-13T15:32:28.031937277Z" level=error msg="encountered an error cleaning up failed sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.031996 containerd[1468]: time="2025-02-13T15:32:28.031973295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.032166 kubelet[2663]: E0213 15:32:28.032132 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.032216 kubelet[2663]: E0213 15:32:28.032197 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:28.032241 kubelet[2663]: E0213 15:32:28.032218 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:28.032291 kubelet[2663]: E0213 15:32:28.032262 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:28.133100 systemd[1]: Created slice kubepods-besteffort-pod757af110_1c95_44e4_a60e_64cc5c9b9a1e.slice - libcontainer container kubepods-besteffort-pod757af110_1c95_44e4_a60e_64cc5c9b9a1e.slice. 
Feb 13 15:32:28.135268 containerd[1468]: time="2025-02-13T15:32:28.135219622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:28.200601 containerd[1468]: time="2025-02-13T15:32:28.200524753Z" level=error msg="Failed to destroy network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.200934 containerd[1468]: time="2025-02-13T15:32:28.200890942Z" level=error msg="encountered an error cleaning up failed sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.200985 containerd[1468]: time="2025-02-13T15:32:28.200959831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.201262 kubelet[2663]: E0213 15:32:28.201207 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.201323 kubelet[2663]: E0213 15:32:28.201279 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:28.201323 kubelet[2663]: E0213 15:32:28.201305 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:28.201411 kubelet[2663]: E0213 15:32:28.201379 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:28.208944 kubelet[2663]: I0213 15:32:28.208923 2663 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6" Feb 13 15:32:28.210398 kubelet[2663]: I0213 15:32:28.210288 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903" Feb 13 15:32:28.212052 kubelet[2663]: I0213 15:32:28.211570 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793" Feb 13 15:32:28.212340 kubelet[2663]: I0213 15:32:28.212317 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5" Feb 13 15:32:28.214538 containerd[1468]: time="2025-02-13T15:32:28.214466324Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:28.214538 containerd[1468]: time="2025-02-13T15:32:28.214496660Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:28.215195 containerd[1468]: time="2025-02-13T15:32:28.214353020Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:28.215990 kubelet[2663]: I0213 15:32:28.215046 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5" Feb 13 15:32:28.216089 containerd[1468]: time="2025-02-13T15:32:28.215200954Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:28.218155 containerd[1468]: time="2025-02-13T15:32:28.218114020Z" level=info msg="Ensure that sandbox 3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903 in task-service has been cleanup successfully" Feb 13 15:32:28.218209 containerd[1468]: time="2025-02-13T15:32:28.218165097Z" level=info 
msg="Ensure that sandbox d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6 in task-service has been cleanup successfully" Feb 13 15:32:28.218539 containerd[1468]: time="2025-02-13T15:32:28.218272338Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:28.218539 containerd[1468]: time="2025-02-13T15:32:28.218412442Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:28.218539 containerd[1468]: time="2025-02-13T15:32:28.218426308Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:28.218539 containerd[1468]: time="2025-02-13T15:32:28.218445324Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:28.218539 containerd[1468]: time="2025-02-13T15:32:28.218430636Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:28.218677 containerd[1468]: time="2025-02-13T15:32:28.218126143Z" level=info msg="Ensure that sandbox 46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5 in task-service has been cleanup successfully" Feb 13 15:32:28.218677 containerd[1468]: time="2025-02-13T15:32:28.218436717Z" level=info msg="Ensure that sandbox 1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5 in task-service has been cleanup successfully" Feb 13 15:32:28.219020 kubelet[2663]: E0213 15:32:28.218956 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:28.219170 containerd[1468]: time="2025-02-13T15:32:28.219146572Z" level=info msg="TearDown network for sandbox 
\"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:28.219214 containerd[1468]: time="2025-02-13T15:32:28.219168833Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:28.219309 containerd[1468]: time="2025-02-13T15:32:28.219284371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:1,}" Feb 13 15:32:28.219352 containerd[1468]: time="2025-02-13T15:32:28.219330998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:1,}" Feb 13 15:32:28.219733 containerd[1468]: time="2025-02-13T15:32:28.219703549Z" level=info msg="Ensure that sandbox 604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793 in task-service has been cleanup successfully" Feb 13 15:32:28.219840 containerd[1468]: time="2025-02-13T15:32:28.219813916Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:28.219840 containerd[1468]: time="2025-02-13T15:32:28.219837991Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:28.220260 containerd[1468]: time="2025-02-13T15:32:28.219960272Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:28.220260 containerd[1468]: time="2025-02-13T15:32:28.219972815Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:28.220624 kubelet[2663]: E0213 15:32:28.220083 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:28.220658 containerd[1468]: time="2025-02-13T15:32:28.220382566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:1,}" Feb 13 15:32:28.220690 containerd[1468]: time="2025-02-13T15:32:28.220669394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:1,}" Feb 13 15:32:28.220845 containerd[1468]: time="2025-02-13T15:32:28.220823394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:32:28.224940 kubelet[2663]: E0213 15:32:28.223314 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:28.225668 containerd[1468]: time="2025-02-13T15:32:28.225627746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:32:28.227035 kubelet[2663]: I0213 15:32:28.227012 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb" Feb 13 15:32:28.228055 containerd[1468]: time="2025-02-13T15:32:28.227844573Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:28.228703 containerd[1468]: time="2025-02-13T15:32:28.228679012Z" level=info msg="Ensure that sandbox 7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb in task-service has been cleanup successfully" Feb 13 15:32:28.228916 containerd[1468]: time="2025-02-13T15:32:28.228863309Z" level=info msg="TearDown network for sandbox 
\"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:28.228916 containerd[1468]: time="2025-02-13T15:32:28.228882996Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:28.229369 containerd[1468]: time="2025-02-13T15:32:28.229343431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:32:28.371162 containerd[1468]: time="2025-02-13T15:32:28.371042640Z" level=error msg="Failed to destroy network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.371776 containerd[1468]: time="2025-02-13T15:32:28.371752855Z" level=error msg="encountered an error cleaning up failed sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.371976 containerd[1468]: time="2025-02-13T15:32:28.371887157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.373175 kubelet[2663]: E0213 15:32:28.372954 
2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.373175 kubelet[2663]: E0213 15:32:28.373038 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:28.373175 kubelet[2663]: E0213 15:32:28.373066 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:28.373322 kubelet[2663]: E0213 15:32:28.373123 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:28.376262 containerd[1468]: time="2025-02-13T15:32:28.376203722Z" level=error msg="Failed to destroy network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.376921 containerd[1468]: time="2025-02-13T15:32:28.376767181Z" level=error msg="encountered an error cleaning up failed sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.376921 containerd[1468]: time="2025-02-13T15:32:28.376824309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.377037 kubelet[2663]: E0213 15:32:28.376998 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.377086 kubelet[2663]: E0213 15:32:28.377050 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:28.377086 kubelet[2663]: E0213 15:32:28.377068 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:28.377136 kubelet[2663]: E0213 15:32:28.377103 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:28.391128 containerd[1468]: time="2025-02-13T15:32:28.391076482Z" level=error msg="Failed to destroy network 
for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.391647 containerd[1468]: time="2025-02-13T15:32:28.391077674Z" level=error msg="Failed to destroy network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.391820 containerd[1468]: time="2025-02-13T15:32:28.391783252Z" level=error msg="encountered an error cleaning up failed sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.391874 containerd[1468]: time="2025-02-13T15:32:28.391839687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.391953 containerd[1468]: time="2025-02-13T15:32:28.391877468Z" level=error msg="Failed to destroy network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 15:32:28.392088 containerd[1468]: time="2025-02-13T15:32:28.392041607Z" level=error msg="encountered an error cleaning up failed sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392128 kubelet[2663]: E0213 15:32:28.392079 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392167 kubelet[2663]: E0213 15:32:28.392140 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:28.392167 kubelet[2663]: E0213 15:32:28.392158 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:28.392248 containerd[1468]: time="2025-02-13T15:32:28.392137667Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392349 containerd[1468]: time="2025-02-13T15:32:28.392305683Z" level=error msg="encountered an error cleaning up failed sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392376 kubelet[2663]: E0213 15:32:28.392227 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:28.392422 containerd[1468]: time="2025-02-13T15:32:28.392358102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup 
network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392686 kubelet[2663]: E0213 15:32:28.392606 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392686 kubelet[2663]: E0213 15:32:28.392646 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.392856 kubelet[2663]: E0213 15:32:28.392687 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:28.392856 kubelet[2663]: E0213 15:32:28.392706 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:28.392856 kubelet[2663]: E0213 15:32:28.392655 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:28.392856 kubelet[2663]: E0213 15:32:28.392768 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:28.392977 kubelet[2663]: E0213 15:32:28.392732 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" 
podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:28.392977 kubelet[2663]: E0213 15:32:28.392820 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:28.397671 containerd[1468]: time="2025-02-13T15:32:28.397636776Z" level=error msg="Failed to destroy network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.397982 containerd[1468]: time="2025-02-13T15:32:28.397950105Z" level=error msg="encountered an error cleaning up failed sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.398040 containerd[1468]: time="2025-02-13T15:32:28.397990190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.398225 kubelet[2663]: E0213 15:32:28.398183 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:28.398225 kubelet[2663]: E0213 15:32:28.398229 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:28.398384 kubelet[2663]: E0213 15:32:28.398249 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:28.398384 kubelet[2663]: E0213 15:32:28.398285 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:29.008377 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:45304.service - OpenSSH per-connection server daemon (10.0.0.1:45304). Feb 13 15:32:29.051405 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 45304 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:29.053224 sshd-session[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:29.057881 systemd-logind[1451]: New session 11 of user core. Feb 13 15:32:29.069142 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:32:29.183057 sshd[3872]: Connection closed by 10.0.0.1 port 45304 Feb 13 15:32:29.183416 sshd-session[3870]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:29.186947 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:45304.service: Deactivated successfully. Feb 13 15:32:29.188704 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:32:29.189317 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:32:29.190210 systemd-logind[1451]: Removed session 11. 
Feb 13 15:32:29.230266 kubelet[2663]: I0213 15:32:29.230231 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be" Feb 13 15:32:29.231307 containerd[1468]: time="2025-02-13T15:32:29.230820017Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:29.231307 containerd[1468]: time="2025-02-13T15:32:29.231008992Z" level=info msg="Ensure that sandbox 376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be in task-service has been cleanup successfully" Feb 13 15:32:29.232082 containerd[1468]: time="2025-02-13T15:32:29.231629237Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:29.232082 containerd[1468]: time="2025-02-13T15:32:29.231645227Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:29.232082 containerd[1468]: time="2025-02-13T15:32:29.231897511Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:29.232082 containerd[1468]: time="2025-02-13T15:32:29.231983564Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:29.232082 containerd[1468]: time="2025-02-13T15:32:29.231992030Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:29.234053 kubelet[2663]: I0213 15:32:29.232470 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544" Feb 13 15:32:29.234110 containerd[1468]: time="2025-02-13T15:32:29.233409444Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:2,}" Feb 13 15:32:29.234110 containerd[1468]: time="2025-02-13T15:32:29.233968484Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:29.234235 containerd[1468]: time="2025-02-13T15:32:29.234176475Z" level=info msg="Ensure that sandbox 97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544 in task-service has been cleanup successfully" Feb 13 15:32:29.234473 systemd[1]: run-netns-cni\x2d91d516e7\x2de5aa\x2dea37\x2d9ca7\x2d0b8c442b00a8.mount: Deactivated successfully. Feb 13 15:32:29.235052 containerd[1468]: time="2025-02-13T15:32:29.234498191Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:29.235052 containerd[1468]: time="2025-02-13T15:32:29.234515974Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:29.235052 containerd[1468]: time="2025-02-13T15:32:29.234731579Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:29.235052 containerd[1468]: time="2025-02-13T15:32:29.234975578Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:29.235052 containerd[1468]: time="2025-02-13T15:32:29.234989314Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:29.235242 kubelet[2663]: I0213 15:32:29.235104 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd" Feb 13 15:32:29.236327 containerd[1468]: time="2025-02-13T15:32:29.235671887Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:32:29.238140 containerd[1468]: time="2025-02-13T15:32:29.238117964Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:29.238618 containerd[1468]: time="2025-02-13T15:32:29.238598177Z" level=info msg="Ensure that sandbox 6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd in task-service has been cleanup successfully" Feb 13 15:32:29.238860 containerd[1468]: time="2025-02-13T15:32:29.238842526Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:29.239020 containerd[1468]: time="2025-02-13T15:32:29.238947333Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:29.239056 systemd[1]: run-netns-cni\x2d24050844\x2d3813\x2d17be\x2ddecd\x2d3e4dd2977a23.mount: Deactivated successfully. 
Feb 13 15:32:29.239448 containerd[1468]: time="2025-02-13T15:32:29.239312270Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:29.239448 containerd[1468]: time="2025-02-13T15:32:29.239392029Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:29.239448 containerd[1468]: time="2025-02-13T15:32:29.239402128Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:29.240279 kubelet[2663]: E0213 15:32:29.239654 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:29.240279 kubelet[2663]: I0213 15:32:29.239794 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674" Feb 13 15:32:29.240441 containerd[1468]: time="2025-02-13T15:32:29.240105150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:2,}" Feb 13 15:32:29.241680 containerd[1468]: time="2025-02-13T15:32:29.241660333Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:29.241929 containerd[1468]: time="2025-02-13T15:32:29.241877431Z" level=info msg="Ensure that sandbox b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674 in task-service has been cleanup successfully" Feb 13 15:32:29.242358 kubelet[2663]: I0213 15:32:29.242105 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493" Feb 13 15:32:29.242450 containerd[1468]: time="2025-02-13T15:32:29.242431633Z" 
level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:29.242542 containerd[1468]: time="2025-02-13T15:32:29.242527823Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:29.242881 systemd[1]: run-netns-cni\x2ddf2c7674\x2dc9ad\x2d9f25\x2d1198\x2d6b49bfc4d3b9.mount: Deactivated successfully. Feb 13 15:32:29.243492 containerd[1468]: time="2025-02-13T15:32:29.243466798Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:29.244269 containerd[1468]: time="2025-02-13T15:32:29.244054713Z" level=info msg="Ensure that sandbox 0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493 in task-service has been cleanup successfully" Feb 13 15:32:29.244362 containerd[1468]: time="2025-02-13T15:32:29.243774387Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:29.244458 containerd[1468]: time="2025-02-13T15:32:29.244437423Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:29.244458 containerd[1468]: time="2025-02-13T15:32:29.244452802Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:29.244563 containerd[1468]: time="2025-02-13T15:32:29.244543372Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:29.244586 containerd[1468]: time="2025-02-13T15:32:29.244562478Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:29.244701 kubelet[2663]: I0213 15:32:29.244679 2663 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36" Feb 13 15:32:29.245262 containerd[1468]: time="2025-02-13T15:32:29.245224442Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:29.245507 containerd[1468]: time="2025-02-13T15:32:29.245460185Z" level=info msg="Ensure that sandbox fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36 in task-service has been cleanup successfully" Feb 13 15:32:29.245681 containerd[1468]: time="2025-02-13T15:32:29.245227077Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:29.245787 containerd[1468]: time="2025-02-13T15:32:29.245765749Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:29.245816 containerd[1468]: time="2025-02-13T15:32:29.245785556Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:29.245850 containerd[1468]: time="2025-02-13T15:32:29.245234150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:32:29.246187 containerd[1468]: time="2025-02-13T15:32:29.246150443Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:29.246187 containerd[1468]: time="2025-02-13T15:32:29.246182523Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:29.246398 kubelet[2663]: E0213 15:32:29.246309 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 13 15:32:29.246442 systemd[1]: run-netns-cni\x2deb096ca3\x2dde88\x2d5c25\x2dec85\x2d90f937ab0814.mount: Deactivated successfully. Feb 13 15:32:29.246559 containerd[1468]: time="2025-02-13T15:32:29.246536459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:2,}" Feb 13 15:32:29.246569 systemd[1]: run-netns-cni\x2dfeeee927\x2dd6d2\x2d49d2\x2d9927\x2d7424cdfc9088.mount: Deactivated successfully. Feb 13 15:32:29.246758 containerd[1468]: time="2025-02-13T15:32:29.246728539Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:29.247560 containerd[1468]: time="2025-02-13T15:32:29.247540465Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:29.247738 containerd[1468]: time="2025-02-13T15:32:29.247673336Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:29.248541 containerd[1468]: time="2025-02-13T15:32:29.248501361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:2,}" Feb 13 15:32:29.356068 containerd[1468]: time="2025-02-13T15:32:29.355940652Z" level=error msg="Failed to destroy network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.356479 containerd[1468]: time="2025-02-13T15:32:29.356330685Z" level=error msg="encountered an error cleaning up failed sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.356479 containerd[1468]: time="2025-02-13T15:32:29.356390257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.357027 kubelet[2663]: E0213 15:32:29.356835 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.357027 kubelet[2663]: E0213 15:32:29.356940 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:29.357027 kubelet[2663]: E0213 15:32:29.356968 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:29.358096 kubelet[2663]: E0213 15:32:29.357041 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:29.371770 containerd[1468]: time="2025-02-13T15:32:29.371736694Z" level=error msg="Failed to destroy network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.372674 containerd[1468]: time="2025-02-13T15:32:29.372648907Z" level=error msg="encountered an error cleaning up failed sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.373075 containerd[1468]: 
time="2025-02-13T15:32:29.373051024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.374170 kubelet[2663]: E0213 15:32:29.374123 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.374314 kubelet[2663]: E0213 15:32:29.374297 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:29.374719 kubelet[2663]: E0213 15:32:29.374401 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 
15:32:29.374719 kubelet[2663]: E0213 15:32:29.374458 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:29.383128 containerd[1468]: time="2025-02-13T15:32:29.383075306Z" level=error msg="Failed to destroy network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.385447 containerd[1468]: time="2025-02-13T15:32:29.385328361Z" level=error msg="encountered an error cleaning up failed sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.385447 containerd[1468]: time="2025-02-13T15:32:29.385398123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.385757 kubelet[2663]: E0213 15:32:29.385711 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.385757 kubelet[2663]: E0213 15:32:29.385763 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:29.385937 kubelet[2663]: E0213 15:32:29.385780 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:29.385937 kubelet[2663]: E0213 15:32:29.385825 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:29.391196 containerd[1468]: time="2025-02-13T15:32:29.391128784Z" level=error msg="Failed to destroy network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.391839 containerd[1468]: time="2025-02-13T15:32:29.391806488Z" level=error msg="encountered an error cleaning up failed sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.391940 containerd[1468]: time="2025-02-13T15:32:29.391877731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.394084 kubelet[2663]: E0213 15:32:29.394031 2663 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.394152 kubelet[2663]: E0213 15:32:29.394093 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:29.394152 kubelet[2663]: E0213 15:32:29.394115 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:29.394245 kubelet[2663]: E0213 15:32:29.394146 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:29.405457 containerd[1468]: time="2025-02-13T15:32:29.403892595Z" level=error msg="Failed to destroy network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.405457 containerd[1468]: time="2025-02-13T15:32:29.404370283Z" level=error msg="encountered an error cleaning up failed sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.405457 containerd[1468]: time="2025-02-13T15:32:29.404444213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.407417 kubelet[2663]: E0213 15:32:29.405121 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 15:32:29.407417 kubelet[2663]: E0213 15:32:29.405313 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:29.407417 kubelet[2663]: E0213 15:32:29.405375 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:29.407539 kubelet[2663]: E0213 15:32:29.405466 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:29.415626 containerd[1468]: time="2025-02-13T15:32:29.415571979Z" level=error msg="Failed to destroy network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.416097 containerd[1468]: time="2025-02-13T15:32:29.416064214Z" level=error msg="encountered an error cleaning up failed sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.416172 containerd[1468]: time="2025-02-13T15:32:29.416139385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.416422 kubelet[2663]: E0213 15:32:29.416387 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:29.416473 kubelet[2663]: E0213 15:32:29.416451 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:29.416502 kubelet[2663]: E0213 15:32:29.416477 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:29.416558 kubelet[2663]: E0213 15:32:29.416530 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:29.472709 systemd[1]: run-netns-cni\x2df83bb3c2\x2d4d04\x2d385a\x2d9122\x2d0884fee85999.mount: Deactivated successfully. 
Feb 13 15:32:30.260107 kubelet[2663]: I0213 15:32:30.260064 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7" Feb 13 15:32:30.260860 containerd[1468]: time="2025-02-13T15:32:30.260731799Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:30.261203 containerd[1468]: time="2025-02-13T15:32:30.260975166Z" level=info msg="Ensure that sandbox 7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7 in task-service has been cleanup successfully" Feb 13 15:32:30.264015 kubelet[2663]: I0213 15:32:30.261921 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.262414692Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.262548213Z" level=info msg="Ensure that sandbox 4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651 in task-service has been cleanup successfully" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.263314914Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.263329622Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.263684969Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.263751945Z" level=info msg="TearDown network for sandbox 
\"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.263760861Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:30.264108 containerd[1468]: time="2025-02-13T15:32:30.264040147Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:30.264551 containerd[1468]: time="2025-02-13T15:32:30.264336233Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:30.264551 containerd[1468]: time="2025-02-13T15:32:30.264349709Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:30.264551 containerd[1468]: time="2025-02-13T15:32:30.264539005Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:30.264551 containerd[1468]: time="2025-02-13T15:32:30.264550757Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:30.264671 containerd[1468]: time="2025-02-13T15:32:30.264616941Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:30.264695 containerd[1468]: time="2025-02-13T15:32:30.264680911Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:30.264695 containerd[1468]: time="2025-02-13T15:32:30.264689327Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:30.265410 kubelet[2663]: I0213 15:32:30.264886 2663 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f" Feb 13 15:32:30.265573 containerd[1468]: time="2025-02-13T15:32:30.265533343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:3,}" Feb 13 15:32:30.265665 systemd[1]: run-netns-cni\x2d74430fb3\x2d646b\x2d91a5\x2d344e\x2da1dba20aaafc.mount: Deactivated successfully. Feb 13 15:32:30.266170 containerd[1468]: time="2025-02-13T15:32:30.265648760Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:30.266170 containerd[1468]: time="2025-02-13T15:32:30.265757514Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:30.266170 containerd[1468]: time="2025-02-13T15:32:30.265769216Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:30.266170 containerd[1468]: time="2025-02-13T15:32:30.265778454Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:30.266170 containerd[1468]: time="2025-02-13T15:32:30.265992495Z" level=info msg="Ensure that sandbox 317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f in task-service has been cleanup successfully" Feb 13 15:32:30.268361 containerd[1468]: time="2025-02-13T15:32:30.266694906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:3,}" Feb 13 15:32:30.268361 containerd[1468]: time="2025-02-13T15:32:30.267936589Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:30.268361 containerd[1468]: 
time="2025-02-13T15:32:30.267953070Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:30.268361 containerd[1468]: time="2025-02-13T15:32:30.268259076Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:30.268361 containerd[1468]: time="2025-02-13T15:32:30.268334427Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:30.268361 containerd[1468]: time="2025-02-13T15:32:30.268343184Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:30.268521 kubelet[2663]: I0213 15:32:30.268242 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945" Feb 13 15:32:30.268562 containerd[1468]: time="2025-02-13T15:32:30.268521549Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:30.268605 containerd[1468]: time="2025-02-13T15:32:30.268585609Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:30.268605 containerd[1468]: time="2025-02-13T15:32:30.268597522Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:30.268742 containerd[1468]: time="2025-02-13T15:32:30.268725121Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:30.268877 containerd[1468]: time="2025-02-13T15:32:30.268859885Z" level=info msg="Ensure that sandbox 6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945 in task-service has been cleanup successfully" Feb 13 
15:32:30.269110 containerd[1468]: time="2025-02-13T15:32:30.269037930Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:30.269110 containerd[1468]: time="2025-02-13T15:32:30.269053950Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:30.269645 systemd[1]: run-netns-cni\x2d632d8fb8\x2ddd32\x2d9868\x2d77e0\x2dbc4b982ff359.mount: Deactivated successfully. Feb 13 15:32:30.269735 systemd[1]: run-netns-cni\x2d84659692\x2dffa3\x2d697a\x2d484f\x2d9520da4223f1.mount: Deactivated successfully. Feb 13 15:32:30.271027 containerd[1468]: time="2025-02-13T15:32:30.270580829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:32:30.271027 containerd[1468]: time="2025-02-13T15:32:30.270743946Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:30.271027 containerd[1468]: time="2025-02-13T15:32:30.270815671Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:30.271027 containerd[1468]: time="2025-02-13T15:32:30.270824888Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:30.271428 containerd[1468]: time="2025-02-13T15:32:30.271385161Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:30.271486 containerd[1468]: time="2025-02-13T15:32:30.271460162Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:30.271486 containerd[1468]: time="2025-02-13T15:32:30.271471393Z" 
level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:30.272385 kubelet[2663]: E0213 15:32:30.272358 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:30.272516 kubelet[2663]: I0213 15:32:30.272488 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c" Feb 13 15:32:30.272986 containerd[1468]: time="2025-02-13T15:32:30.272929944Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:30.273260 containerd[1468]: time="2025-02-13T15:32:30.273195444Z" level=info msg="Ensure that sandbox 866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c in task-service has been cleanup successfully" Feb 13 15:32:30.273779 containerd[1468]: time="2025-02-13T15:32:30.273497250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:3,}" Feb 13 15:32:30.273597 systemd[1]: run-netns-cni\x2d08349655\x2d8074\x2dbc06\x2d3429\x2da4da356bdee7.mount: Deactivated successfully. 
Feb 13 15:32:30.274114 containerd[1468]: time="2025-02-13T15:32:30.274040431Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:30.274114 containerd[1468]: time="2025-02-13T15:32:30.274057122Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:30.274472 containerd[1468]: time="2025-02-13T15:32:30.274443349Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:30.274669 containerd[1468]: time="2025-02-13T15:32:30.274515815Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:30.274669 containerd[1468]: time="2025-02-13T15:32:30.274531123Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:30.275042 containerd[1468]: time="2025-02-13T15:32:30.274829263Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:30.275042 containerd[1468]: time="2025-02-13T15:32:30.274956373Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:30.275042 containerd[1468]: time="2025-02-13T15:32:30.275002880Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:30.275316 containerd[1468]: time="2025-02-13T15:32:30.275296522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:32:30.275430 kubelet[2663]: I0213 15:32:30.275356 2663 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b" Feb 13 15:32:30.275734 containerd[1468]: time="2025-02-13T15:32:30.275702695Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:30.275940 containerd[1468]: time="2025-02-13T15:32:30.275919383Z" level=info msg="Ensure that sandbox 6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b in task-service has been cleanup successfully" Feb 13 15:32:30.276022 systemd[1]: run-netns-cni\x2d896e9f6c\x2d9bda\x2dd3c5\x2d752d\x2de6331b70d150.mount: Deactivated successfully. Feb 13 15:32:30.276123 containerd[1468]: time="2025-02-13T15:32:30.276104039Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:30.276164 containerd[1468]: time="2025-02-13T15:32:30.276122033Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:30.276402 containerd[1468]: time="2025-02-13T15:32:30.276379286Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:30.276486 containerd[1468]: time="2025-02-13T15:32:30.276467142Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:30.276514 containerd[1468]: time="2025-02-13T15:32:30.276484735Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:30.276810 containerd[1468]: time="2025-02-13T15:32:30.276763829Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:30.276957 containerd[1468]: time="2025-02-13T15:32:30.276936324Z" level=info msg="TearDown network for sandbox 
\"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:30.276957 containerd[1468]: time="2025-02-13T15:32:30.276951793Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:30.277170 kubelet[2663]: E0213 15:32:30.277151 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:30.277421 containerd[1468]: time="2025-02-13T15:32:30.277398853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:3,}" Feb 13 15:32:30.470872 systemd[1]: run-netns-cni\x2d5ea5be56\x2d17f8\x2d4f01\x2d22b8\x2d643bb5f49127.mount: Deactivated successfully. Feb 13 15:32:30.657201 containerd[1468]: time="2025-02-13T15:32:30.657045751Z" level=error msg="Failed to destroy network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.658048 containerd[1468]: time="2025-02-13T15:32:30.657954669Z" level=error msg="encountered an error cleaning up failed sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.658048 containerd[1468]: time="2025-02-13T15:32:30.658010675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:3,} failed, 
error" error="failed to setup network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.658824 kubelet[2663]: E0213 15:32:30.658473 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.658824 kubelet[2663]: E0213 15:32:30.658540 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:30.658824 kubelet[2663]: E0213 15:32:30.658561 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:30.658954 kubelet[2663]: E0213 15:32:30.658601 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:30.664458 containerd[1468]: time="2025-02-13T15:32:30.664434127Z" level=error msg="Failed to destroy network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.667389 containerd[1468]: time="2025-02-13T15:32:30.667367620Z" level=error msg="encountered an error cleaning up failed sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.668082 containerd[1468]: time="2025-02-13T15:32:30.668050493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.669013 kubelet[2663]: E0213 15:32:30.668759 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.669013 kubelet[2663]: E0213 15:32:30.668796 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:30.669013 kubelet[2663]: E0213 15:32:30.668817 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:30.669150 kubelet[2663]: E0213 15:32:30.668849 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:30.681281 containerd[1468]: time="2025-02-13T15:32:30.681115828Z" level=error msg="Failed to destroy network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.681281 containerd[1468]: time="2025-02-13T15:32:30.681549483Z" level=error msg="encountered an error cleaning up failed sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.681281 containerd[1468]: time="2025-02-13T15:32:30.681615657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.682183 kubelet[2663]: E0213 15:32:30.681794 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.682183 kubelet[2663]: E0213 15:32:30.681831 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:30.682183 kubelet[2663]: E0213 15:32:30.681856 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:30.682283 kubelet[2663]: E0213 15:32:30.681890 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:30.684189 containerd[1468]: time="2025-02-13T15:32:30.683763795Z" level=error msg="Failed to destroy network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.684346 containerd[1468]: time="2025-02-13T15:32:30.684160711Z" level=error msg="encountered an error cleaning up failed sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.684546 containerd[1468]: time="2025-02-13T15:32:30.684497504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.684860 kubelet[2663]: E0213 15:32:30.684823 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.684949 kubelet[2663]: E0213 15:32:30.684867 2663 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:30.684949 kubelet[2663]: E0213 15:32:30.684882 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:30.685000 kubelet[2663]: E0213 15:32:30.684958 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:30.691321 containerd[1468]: time="2025-02-13T15:32:30.691297574Z" level=error msg="Failed to destroy network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.691717 containerd[1468]: time="2025-02-13T15:32:30.691679371Z" level=error msg="encountered an error cleaning up failed sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.691857 containerd[1468]: time="2025-02-13T15:32:30.691722532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.691922 kubelet[2663]: E0213 15:32:30.691848 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.691922 kubelet[2663]: E0213 15:32:30.691876 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:30.691922 kubelet[2663]: E0213 15:32:30.691891 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:30.692068 kubelet[2663]: E0213 15:32:30.691927 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:30.693385 containerd[1468]: time="2025-02-13T15:32:30.693339892Z" level=error msg="Failed to destroy network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.693738 containerd[1468]: time="2025-02-13T15:32:30.693716349Z" level=error msg="encountered an error cleaning up failed sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.693788 containerd[1468]: time="2025-02-13T15:32:30.693769269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.694036 kubelet[2663]: E0213 15:32:30.693994 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:30.694089 kubelet[2663]: E0213 15:32:30.694060 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:30.694089 kubelet[2663]: E0213 15:32:30.694079 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:30.694159 kubelet[2663]: E0213 15:32:30.694124 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:31.280722 kubelet[2663]: I0213 15:32:31.280690 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505" Feb 13 15:32:31.281459 containerd[1468]: time="2025-02-13T15:32:31.281418585Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:31.281691 containerd[1468]: time="2025-02-13T15:32:31.281608122Z" level=info msg="Ensure that sandbox db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505 in task-service has been cleanup successfully" Feb 13 15:32:31.282051 containerd[1468]: time="2025-02-13T15:32:31.281980481Z" level=info msg="TearDown network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" successfully" Feb 13 15:32:31.282051 containerd[1468]: time="2025-02-13T15:32:31.281998085Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" returns successfully" Feb 13 
15:32:31.282548 containerd[1468]: time="2025-02-13T15:32:31.282514745Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:31.282835 containerd[1468]: time="2025-02-13T15:32:31.282601569Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:31.282835 containerd[1468]: time="2025-02-13T15:32:31.282611888Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:31.283074 containerd[1468]: time="2025-02-13T15:32:31.283049521Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:31.283171 containerd[1468]: time="2025-02-13T15:32:31.283130413Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:31.283171 containerd[1468]: time="2025-02-13T15:32:31.283168524Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:31.283663 kubelet[2663]: I0213 15:32:31.283638 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e" Feb 13 15:32:31.283889 containerd[1468]: time="2025-02-13T15:32:31.283783559Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:31.283889 containerd[1468]: time="2025-02-13T15:32:31.283853912Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:31.283889 containerd[1468]: time="2025-02-13T15:32:31.283862258Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 
13 15:32:31.284254 containerd[1468]: time="2025-02-13T15:32:31.284043668Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:31.284254 containerd[1468]: time="2025-02-13T15:32:31.284172701Z" level=info msg="Ensure that sandbox dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e in task-service has been cleanup successfully" Feb 13 15:32:31.284391 containerd[1468]: time="2025-02-13T15:32:31.284372868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:4,}" Feb 13 15:32:31.284733 containerd[1468]: time="2025-02-13T15:32:31.284694792Z" level=info msg="TearDown network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" successfully" Feb 13 15:32:31.284733 containerd[1468]: time="2025-02-13T15:32:31.284713427Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" returns successfully" Feb 13 15:32:31.285086 containerd[1468]: time="2025-02-13T15:32:31.285066220Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:31.285154 containerd[1468]: time="2025-02-13T15:32:31.285141472Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:31.285154 containerd[1468]: time="2025-02-13T15:32:31.285151621Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:31.285436 containerd[1468]: time="2025-02-13T15:32:31.285409776Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:31.285996 containerd[1468]: time="2025-02-13T15:32:31.285501478Z" level=info msg="TearDown network for sandbox 
\"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:31.285996 containerd[1468]: time="2025-02-13T15:32:31.285532516Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:31.286065 kubelet[2663]: I0213 15:32:31.285632 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3" Feb 13 15:32:31.286096 containerd[1468]: time="2025-02-13T15:32:31.286026715Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286312282Z" level=info msg="Ensure that sandbox d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3 in task-service has been cleanup successfully" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286456172Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286523038Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286531985Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286788246Z" level=info msg="TearDown network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" successfully" Feb 13 15:32:31.286947 containerd[1468]: time="2025-02-13T15:32:31.286802112Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" returns successfully" Feb 13 15:32:31.287114 containerd[1468]: 
time="2025-02-13T15:32:31.286985437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:32:31.287565 containerd[1468]: time="2025-02-13T15:32:31.287417929Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:31.287565 containerd[1468]: time="2025-02-13T15:32:31.287494503Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:31.287565 containerd[1468]: time="2025-02-13T15:32:31.287503690Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:31.288035 containerd[1468]: time="2025-02-13T15:32:31.288004281Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:31.288202 containerd[1468]: time="2025-02-13T15:32:31.288084582Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:31.288202 containerd[1468]: time="2025-02-13T15:32:31.288095914Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:31.288434 containerd[1468]: time="2025-02-13T15:32:31.288414472Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:31.288517 containerd[1468]: time="2025-02-13T15:32:31.288496456Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:31.288517 containerd[1468]: time="2025-02-13T15:32:31.288512536Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" 
Feb 13 15:32:31.288590 kubelet[2663]: I0213 15:32:31.288562 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39" Feb 13 15:32:31.288753 kubelet[2663]: E0213 15:32:31.288734 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:31.289027 containerd[1468]: time="2025-02-13T15:32:31.289008328Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:31.289160 containerd[1468]: time="2025-02-13T15:32:31.289144885Z" level=info msg="Ensure that sandbox 0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39 in task-service has been cleanup successfully" Feb 13 15:32:31.289301 containerd[1468]: time="2025-02-13T15:32:31.289285008Z" level=info msg="TearDown network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" successfully" Feb 13 15:32:31.289331 containerd[1468]: time="2025-02-13T15:32:31.289298823Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" returns successfully" Feb 13 15:32:31.289422 containerd[1468]: time="2025-02-13T15:32:31.289406476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:4,}" Feb 13 15:32:31.289923 containerd[1468]: time="2025-02-13T15:32:31.289765370Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:31.289923 containerd[1468]: time="2025-02-13T15:32:31.289839149Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:31.289923 containerd[1468]: time="2025-02-13T15:32:31.289847915Z" level=info 
msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:31.292916 containerd[1468]: time="2025-02-13T15:32:31.291668547Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:31.292916 containerd[1468]: time="2025-02-13T15:32:31.291739450Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:31.292916 containerd[1468]: time="2025-02-13T15:32:31.291748277Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:31.293349 containerd[1468]: time="2025-02-13T15:32:31.293324930Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:31.293451 containerd[1468]: time="2025-02-13T15:32:31.293402545Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:31.293451 containerd[1468]: time="2025-02-13T15:32:31.293439925Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:31.293658 kubelet[2663]: E0213 15:32:31.293637 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:31.294302 containerd[1468]: time="2025-02-13T15:32:31.294283100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:4,}" Feb 13 15:32:31.294855 kubelet[2663]: I0213 15:32:31.294837 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c" 
Feb 13 15:32:31.295237 containerd[1468]: time="2025-02-13T15:32:31.295219630Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:31.295469 containerd[1468]: time="2025-02-13T15:32:31.295453520Z" level=info msg="Ensure that sandbox d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c in task-service has been cleanup successfully" Feb 13 15:32:31.295661 containerd[1468]: time="2025-02-13T15:32:31.295638857Z" level=info msg="TearDown network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" successfully" Feb 13 15:32:31.295661 containerd[1468]: time="2025-02-13T15:32:31.295654126Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" returns successfully" Feb 13 15:32:31.295926 containerd[1468]: time="2025-02-13T15:32:31.295856987Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:31.295978 containerd[1468]: time="2025-02-13T15:32:31.295941125Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:31.295978 containerd[1468]: time="2025-02-13T15:32:31.295950694Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:31.296157 containerd[1468]: time="2025-02-13T15:32:31.296139789Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:31.296220 containerd[1468]: time="2025-02-13T15:32:31.296205752Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:31.296220 containerd[1468]: time="2025-02-13T15:32:31.296217295Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" 
returns successfully" Feb 13 15:32:31.296700 containerd[1468]: time="2025-02-13T15:32:31.296673391Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:31.296783 containerd[1468]: time="2025-02-13T15:32:31.296766627Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:31.296783 containerd[1468]: time="2025-02-13T15:32:31.296780674Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:31.297582 kubelet[2663]: I0213 15:32:31.297190 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468" Feb 13 15:32:31.297637 containerd[1468]: time="2025-02-13T15:32:31.297194871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:32:31.298035 containerd[1468]: time="2025-02-13T15:32:31.298006817Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:31.298199 containerd[1468]: time="2025-02-13T15:32:31.298177017Z" level=info msg="Ensure that sandbox 1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468 in task-service has been cleanup successfully" Feb 13 15:32:31.298366 containerd[1468]: time="2025-02-13T15:32:31.298345444Z" level=info msg="TearDown network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" successfully" Feb 13 15:32:31.298366 containerd[1468]: time="2025-02-13T15:32:31.298364309Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" returns successfully" Feb 13 15:32:31.456636 containerd[1468]: time="2025-02-13T15:32:31.456584498Z" level=info 
msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:31.456809 containerd[1468]: time="2025-02-13T15:32:31.456737295Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 15:32:31.456809 containerd[1468]: time="2025-02-13T15:32:31.456750450Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:31.457466 containerd[1468]: time="2025-02-13T15:32:31.457441960Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:31.457723 containerd[1468]: time="2025-02-13T15:32:31.457687782Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:31.457723 containerd[1468]: time="2025-02-13T15:32:31.457706266Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:31.458052 containerd[1468]: time="2025-02-13T15:32:31.458013493Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:31.458191 containerd[1468]: time="2025-02-13T15:32:31.458135262Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:31.458191 containerd[1468]: time="2025-02-13T15:32:31.458145511Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:31.458692 containerd[1468]: time="2025-02-13T15:32:31.458649168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:4,}" Feb 13 15:32:31.472727 systemd[1]: 
run-netns-cni\x2d83f0bd41\x2d6b2a\x2dc317\x2d921d\x2d180f2b4c1db8.mount: Deactivated successfully. Feb 13 15:32:31.472933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505-shm.mount: Deactivated successfully. Feb 13 15:32:31.473049 systemd[1]: run-netns-cni\x2d58cef8cc\x2d6e73\x2de4c3\x2d9ec2\x2d5b420f2d7107.mount: Deactivated successfully. Feb 13 15:32:31.473138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468-shm.mount: Deactivated successfully. Feb 13 15:32:31.473219 systemd[1]: run-netns-cni\x2d60e9b61f\x2d1097\x2dda55\x2dcdaa\x2d898a131c2575.mount: Deactivated successfully. Feb 13 15:32:31.473291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3-shm.mount: Deactivated successfully. Feb 13 15:32:31.766016 containerd[1468]: time="2025-02-13T15:32:31.765963947Z" level=error msg="Failed to destroy network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.769916 containerd[1468]: time="2025-02-13T15:32:31.766611644Z" level=error msg="encountered an error cleaning up failed sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.769916 containerd[1468]: time="2025-02-13T15:32:31.766670335Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.770046 kubelet[2663]: E0213 15:32:31.766970 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.770046 kubelet[2663]: E0213 15:32:31.767052 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:31.770046 kubelet[2663]: E0213 15:32:31.767098 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:31.770197 kubelet[2663]: E0213 15:32:31.768080 2663 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:31.776830 containerd[1468]: time="2025-02-13T15:32:31.776679043Z" level=error msg="Failed to destroy network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.777322 containerd[1468]: time="2025-02-13T15:32:31.777275654Z" level=error msg="encountered an error cleaning up failed sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.777423 containerd[1468]: time="2025-02-13T15:32:31.777403443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.779732 kubelet[2663]: E0213 15:32:31.779336 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.779732 kubelet[2663]: E0213 15:32:31.779407 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:31.779732 kubelet[2663]: E0213 15:32:31.779431 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:31.779842 kubelet[2663]: E0213 15:32:31.779481 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:31.786125 containerd[1468]: time="2025-02-13T15:32:31.786085458Z" level=error msg="Failed to destroy network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.787461 containerd[1468]: time="2025-02-13T15:32:31.787344645Z" level=error msg="encountered an error cleaning up failed sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.787582 containerd[1468]: time="2025-02-13T15:32:31.787563175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.787939 kubelet[2663]: E0213 15:32:31.787866 2663 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.788010 kubelet[2663]: E0213 15:32:31.787959 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:31.788042 kubelet[2663]: E0213 15:32:31.788013 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:31.788134 kubelet[2663]: E0213 15:32:31.788091 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:31.805649 containerd[1468]: time="2025-02-13T15:32:31.805226968Z" level=error msg="Failed to destroy network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.805935 containerd[1468]: time="2025-02-13T15:32:31.805885285Z" level=error msg="Failed to destroy network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.806247 containerd[1468]: time="2025-02-13T15:32:31.806219594Z" level=error msg="encountered an error cleaning up failed sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.806301 containerd[1468]: time="2025-02-13T15:32:31.806277482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.806519 kubelet[2663]: E0213 
15:32:31.806480 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.806592 kubelet[2663]: E0213 15:32:31.806542 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:31.806592 kubelet[2663]: E0213 15:32:31.806562 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:31.806672 kubelet[2663]: E0213 15:32:31.806606 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:31.807748 containerd[1468]: time="2025-02-13T15:32:31.807716416Z" level=error msg="encountered an error cleaning up failed sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.807792 containerd[1468]: time="2025-02-13T15:32:31.807761481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.808844 kubelet[2663]: E0213 15:32:31.808673 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.808844 kubelet[2663]: E0213 15:32:31.808731 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:31.808844 kubelet[2663]: E0213 15:32:31.808749 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:31.809001 kubelet[2663]: E0213 15:32:31.808802 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:31.813200 containerd[1468]: time="2025-02-13T15:32:31.813161979Z" level=error msg="Failed to destroy network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.813532 containerd[1468]: time="2025-02-13T15:32:31.813494484Z" level=error msg="encountered an 
error cleaning up failed sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.813532 containerd[1468]: time="2025-02-13T15:32:31.813542916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.813845 kubelet[2663]: E0213 15:32:31.813675 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:31.813845 kubelet[2663]: E0213 15:32:31.813750 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:31.813845 kubelet[2663]: E0213 15:32:31.813768 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:31.814044 kubelet[2663]: E0213 15:32:31.813806 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:32.049162 kubelet[2663]: I0213 15:32:32.049047 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:32.101693 kubelet[2663]: E0213 15:32:32.049818 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:32.302894 kubelet[2663]: I0213 15:32:32.302649 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d" Feb 13 15:32:32.303874 containerd[1468]: time="2025-02-13T15:32:32.303491739Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" Feb 13 15:32:32.303874 containerd[1468]: time="2025-02-13T15:32:32.303677487Z" level=info msg="Ensure that 
sandbox 66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d in task-service has been cleanup successfully" Feb 13 15:32:32.304285 containerd[1468]: time="2025-02-13T15:32:32.304259502Z" level=info msg="TearDown network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" successfully" Feb 13 15:32:32.304285 containerd[1468]: time="2025-02-13T15:32:32.304279750Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" returns successfully" Feb 13 15:32:32.307977 kubelet[2663]: I0213 15:32:32.307921 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b" Feb 13 15:32:32.309040 containerd[1468]: time="2025-02-13T15:32:32.309003455Z" level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" Feb 13 15:32:32.309247 containerd[1468]: time="2025-02-13T15:32:32.309223198Z" level=info msg="Ensure that sandbox 5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b in task-service has been cleanup successfully" Feb 13 15:32:32.310057 containerd[1468]: time="2025-02-13T15:32:32.310023912Z" level=info msg="TearDown network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" successfully" Feb 13 15:32:32.311388 containerd[1468]: time="2025-02-13T15:32:32.311032808Z" level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" returns successfully" Feb 13 15:32:32.311504 containerd[1468]: time="2025-02-13T15:32:32.311470370Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:32.311652 containerd[1468]: time="2025-02-13T15:32:32.311570168Z" level=info msg="TearDown network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" successfully" Feb 13 15:32:32.311652 
containerd[1468]: time="2025-02-13T15:32:32.311624370Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" returns successfully" Feb 13 15:32:32.311728 containerd[1468]: time="2025-02-13T15:32:32.311392474Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:32.311839 containerd[1468]: time="2025-02-13T15:32:32.311802193Z" level=info msg="TearDown network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" successfully" Feb 13 15:32:32.311879 containerd[1468]: time="2025-02-13T15:32:32.311836638Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" returns successfully" Feb 13 15:32:32.312379 containerd[1468]: time="2025-02-13T15:32:32.312275252Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:32.312379 containerd[1468]: time="2025-02-13T15:32:32.312315037Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:32.312379 containerd[1468]: time="2025-02-13T15:32:32.312377775Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:32.312462 containerd[1468]: time="2025-02-13T15:32:32.312387403Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:32.312545 containerd[1468]: time="2025-02-13T15:32:32.312526234Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 15:32:32.312545 containerd[1468]: time="2025-02-13T15:32:32.312540220Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:32.312797 
containerd[1468]: time="2025-02-13T15:32:32.312766926Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:32.312797 containerd[1468]: time="2025-02-13T15:32:32.312777296Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:32.312797 containerd[1468]: time="2025-02-13T15:32:32.312859239Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:32.312797 containerd[1468]: time="2025-02-13T15:32:32.312869849Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:32.312797 containerd[1468]: time="2025-02-13T15:32:32.312938919Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:32.312797 containerd[1468]: time="2025-02-13T15:32:32.312949589Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:32.313272 kubelet[2663]: I0213 15:32:32.312998 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587" Feb 13 15:32:32.313466 containerd[1468]: time="2025-02-13T15:32:32.313445130Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:32.313651 containerd[1468]: time="2025-02-13T15:32:32.313623295Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:32.313686 containerd[1468]: time="2025-02-13T15:32:32.313641479Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 
15:32:32.313711 containerd[1468]: time="2025-02-13T15:32:32.313697274Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:32.313732 containerd[1468]: time="2025-02-13T15:32:32.313713565Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" Feb 13 15:32:32.313779 containerd[1468]: time="2025-02-13T15:32:32.313760142Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:32.313779 containerd[1468]: time="2025-02-13T15:32:32.313775621Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:32.314317 containerd[1468]: time="2025-02-13T15:32:32.313876972Z" level=info msg="Ensure that sandbox cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587 in task-service has been cleanup successfully" Feb 13 15:32:32.314317 containerd[1468]: time="2025-02-13T15:32:32.314048063Z" level=info msg="TearDown network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" successfully" Feb 13 15:32:32.314317 containerd[1468]: time="2025-02-13T15:32:32.314060025Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" returns successfully" Feb 13 15:32:32.314412 containerd[1468]: time="2025-02-13T15:32:32.314388874Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:32.314445 containerd[1468]: time="2025-02-13T15:32:32.314413691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:5,}" Feb 13 15:32:32.314493 containerd[1468]: time="2025-02-13T15:32:32.314469866Z" level=info msg="TearDown network for sandbox 
\"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" successfully" Feb 13 15:32:32.314493 containerd[1468]: time="2025-02-13T15:32:32.314490505Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" returns successfully" Feb 13 15:32:32.314574 containerd[1468]: time="2025-02-13T15:32:32.314553924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:5,}" Feb 13 15:32:32.314898 containerd[1468]: time="2025-02-13T15:32:32.314876049Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:32.314994 containerd[1468]: time="2025-02-13T15:32:32.314961610Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:32.314994 containerd[1468]: time="2025-02-13T15:32:32.314976208Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:32.315322 containerd[1468]: time="2025-02-13T15:32:32.315298042Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:32.315652 containerd[1468]: time="2025-02-13T15:32:32.315581986Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:32.315652 containerd[1468]: time="2025-02-13T15:32:32.315595791Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:32.316070 containerd[1468]: time="2025-02-13T15:32:32.315938235Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:32.316070 containerd[1468]: time="2025-02-13T15:32:32.316020188Z" 
level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:32.316070 containerd[1468]: time="2025-02-13T15:32:32.316029426Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:32.317182 containerd[1468]: time="2025-02-13T15:32:32.317039804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:32:32.319250 kubelet[2663]: I0213 15:32:32.319138 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6" Feb 13 15:32:32.320029 containerd[1468]: time="2025-02-13T15:32:32.319998333Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" Feb 13 15:32:32.321466 containerd[1468]: time="2025-02-13T15:32:32.320178692Z" level=info msg="Ensure that sandbox 4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6 in task-service has been cleanup successfully" Feb 13 15:32:32.321466 containerd[1468]: time="2025-02-13T15:32:32.320879589Z" level=info msg="TearDown network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" successfully" Feb 13 15:32:32.321466 containerd[1468]: time="2025-02-13T15:32:32.320891361Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" returns successfully" Feb 13 15:32:32.322686 containerd[1468]: time="2025-02-13T15:32:32.322657460Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:32.322760 containerd[1468]: time="2025-02-13T15:32:32.322741056Z" level=info msg="TearDown network for sandbox 
\"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" successfully" Feb 13 15:32:32.322760 containerd[1468]: time="2025-02-13T15:32:32.322751516Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" returns successfully" Feb 13 15:32:32.322970 containerd[1468]: time="2025-02-13T15:32:32.322950159Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:32.323045 containerd[1468]: time="2025-02-13T15:32:32.323028747Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:32.323045 containerd[1468]: time="2025-02-13T15:32:32.323042392Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:32.323284 kubelet[2663]: I0213 15:32:32.323248 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801" Feb 13 15:32:32.323373 containerd[1468]: time="2025-02-13T15:32:32.323322288Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:32.323414 containerd[1468]: time="2025-02-13T15:32:32.323397841Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:32.323465 containerd[1468]: time="2025-02-13T15:32:32.323412188Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:32.323617 containerd[1468]: time="2025-02-13T15:32:32.323579612Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" Feb 13 15:32:32.323734 containerd[1468]: time="2025-02-13T15:32:32.323715959Z" level=info msg="Ensure that sandbox 
a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801 in task-service has been cleanup successfully" Feb 13 15:32:32.323871 containerd[1468]: time="2025-02-13T15:32:32.323853567Z" level=info msg="TearDown network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" successfully" Feb 13 15:32:32.323871 containerd[1468]: time="2025-02-13T15:32:32.323867343Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" returns successfully" Feb 13 15:32:32.323988 containerd[1468]: time="2025-02-13T15:32:32.323963594Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:32.324063 containerd[1468]: time="2025-02-13T15:32:32.324033245Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:32.324063 containerd[1468]: time="2025-02-13T15:32:32.324045087Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:32.324260 containerd[1468]: time="2025-02-13T15:32:32.324226548Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:32.324453 containerd[1468]: time="2025-02-13T15:32:32.324296960Z" level=info msg="TearDown network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" successfully" Feb 13 15:32:32.324453 containerd[1468]: time="2025-02-13T15:32:32.324305566Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" returns successfully" Feb 13 15:32:32.324507 kubelet[2663]: E0213 15:32:32.324301 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:32.324535 containerd[1468]: 
time="2025-02-13T15:32:32.324475144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:5,}" Feb 13 15:32:32.324895 containerd[1468]: time="2025-02-13T15:32:32.324859527Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:32.325252 containerd[1468]: time="2025-02-13T15:32:32.325218882Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:32.325252 containerd[1468]: time="2025-02-13T15:32:32.325241174Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:32.325634 containerd[1468]: time="2025-02-13T15:32:32.325563650Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:32.325634 containerd[1468]: time="2025-02-13T15:32:32.325633021Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:32.326026 containerd[1468]: time="2025-02-13T15:32:32.325642218Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:32.326026 containerd[1468]: time="2025-02-13T15:32:32.325770338Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:32.326026 containerd[1468]: time="2025-02-13T15:32:32.325832054Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:32.326026 containerd[1468]: time="2025-02-13T15:32:32.325840490Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 
15:32:32.326490 kubelet[2663]: E0213 15:32:32.325943 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:32.326490 kubelet[2663]: E0213 15:32:32.326489 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:32.326541 containerd[1468]: time="2025-02-13T15:32:32.326071053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:5,}" Feb 13 15:32:32.326576 kubelet[2663]: I0213 15:32:32.326535 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3" Feb 13 15:32:32.326847 containerd[1468]: time="2025-02-13T15:32:32.326821023Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" Feb 13 15:32:32.327009 containerd[1468]: time="2025-02-13T15:32:32.326984479Z" level=info msg="Ensure that sandbox 322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3 in task-service has been cleanup successfully" Feb 13 15:32:32.327320 containerd[1468]: time="2025-02-13T15:32:32.327259416Z" level=info msg="TearDown network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" successfully" Feb 13 15:32:32.327320 containerd[1468]: time="2025-02-13T15:32:32.327275165Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" returns successfully" Feb 13 15:32:32.327523 containerd[1468]: time="2025-02-13T15:32:32.327503144Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:32.327590 containerd[1468]: 
time="2025-02-13T15:32:32.327572113Z" level=info msg="TearDown network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" successfully" Feb 13 15:32:32.327590 containerd[1468]: time="2025-02-13T15:32:32.327584808Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" returns successfully" Feb 13 15:32:32.327781 containerd[1468]: time="2025-02-13T15:32:32.327760237Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:32.327871 containerd[1468]: time="2025-02-13T15:32:32.327852720Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:32.327871 containerd[1468]: time="2025-02-13T15:32:32.327865525Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:32.328076 containerd[1468]: time="2025-02-13T15:32:32.328053348Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:32.328154 containerd[1468]: time="2025-02-13T15:32:32.328136203Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:32.328154 containerd[1468]: time="2025-02-13T15:32:32.328149078Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:32.328386 containerd[1468]: time="2025-02-13T15:32:32.328363130Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:32.328450 containerd[1468]: time="2025-02-13T15:32:32.328431449Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:32.328450 containerd[1468]: 
time="2025-02-13T15:32:32.328443712Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:32.328711 containerd[1468]: time="2025-02-13T15:32:32.328691196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:32:32.471231 systemd[1]: run-netns-cni\x2daefa8dd1\x2d768b\x2dcb43\x2d3f24\x2d9e9a2fedb73f.mount: Deactivated successfully. Feb 13 15:32:32.471348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6-shm.mount: Deactivated successfully. Feb 13 15:32:32.471424 systemd[1]: run-netns-cni\x2dd4ed587d\x2d07e0\x2d61e0\x2d9e42\x2dd06207ae7b86.mount: Deactivated successfully. Feb 13 15:32:32.471497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587-shm.mount: Deactivated successfully. Feb 13 15:32:32.471570 systemd[1]: run-netns-cni\x2d47acaafe\x2d42e1\x2d1eab\x2db1e6\x2d283a741c4ce5.mount: Deactivated successfully. Feb 13 15:32:32.471658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801-shm.mount: Deactivated successfully. Feb 13 15:32:32.471744 systemd[1]: run-netns-cni\x2d43753d20\x2d5905\x2d5528\x2d41ba\x2df4297ef3467e.mount: Deactivated successfully. Feb 13 15:32:32.471819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b-shm.mount: Deactivated successfully. 
Feb 13 15:32:32.748731 containerd[1468]: time="2025-02-13T15:32:32.748677583Z" level=error msg="Failed to destroy network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.749446 containerd[1468]: time="2025-02-13T15:32:32.749410630Z" level=error msg="encountered an error cleaning up failed sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.749499 containerd[1468]: time="2025-02-13T15:32:32.749471785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.749940 kubelet[2663]: E0213 15:32:32.749761 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.749940 kubelet[2663]: E0213 15:32:32.749822 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:32.749940 kubelet[2663]: E0213 15:32:32.749850 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xbpjc" Feb 13 15:32:32.750110 kubelet[2663]: E0213 15:32:32.749887 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xbpjc_calico-system(757af110-1c95-44e4-a60e-64cc5c9b9a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xbpjc" podUID="757af110-1c95-44e4-a60e-64cc5c9b9a1e" Feb 13 15:32:32.773022 containerd[1468]: time="2025-02-13T15:32:32.772976358Z" level=error msg="Failed to destroy network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:32:32.773752 containerd[1468]: time="2025-02-13T15:32:32.773203926Z" level=error msg="Failed to destroy network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.773752 containerd[1468]: time="2025-02-13T15:32:32.773566267Z" level=error msg="encountered an error cleaning up failed sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.773752 containerd[1468]: time="2025-02-13T15:32:32.773621560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.773752 containerd[1468]: time="2025-02-13T15:32:32.773630196Z" level=error msg="encountered an error cleaning up failed sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.773929 kubelet[2663]: E0213 15:32:32.773856 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.773974 kubelet[2663]: E0213 15:32:32.773937 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:32.773974 kubelet[2663]: E0213 15:32:32.773958 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" Feb 13 15:32:32.774017 kubelet[2663]: E0213 15:32:32.773998 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-w55w7_calico-apiserver(bde67ccf-8db2-4a00-9ea9-11acd382b495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podUID="bde67ccf-8db2-4a00-9ea9-11acd382b495" Feb 13 15:32:32.774749 containerd[1468]: time="2025-02-13T15:32:32.773694687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.774817 containerd[1468]: time="2025-02-13T15:32:32.773722089Z" level=error msg="Failed to destroy network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.775354 kubelet[2663]: E0213 15:32:32.774854 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.775354 kubelet[2663]: E0213 15:32:32.774935 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:32.775354 kubelet[2663]: E0213 15:32:32.774952 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" Feb 13 15:32:32.775451 containerd[1468]: time="2025-02-13T15:32:32.775285627Z" level=error msg="encountered an error cleaning up failed sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.775451 containerd[1468]: time="2025-02-13T15:32:32.775345239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.775499 kubelet[2663]: E0213 15:32:32.774981 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd68f4995-bdz7b_calico-apiserver(69e7b0a9-0728-4031-b89b-c2766dc8da1b)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podUID="69e7b0a9-0728-4031-b89b-c2766dc8da1b" Feb 13 15:32:32.775499 kubelet[2663]: E0213 15:32:32.775486 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.775576 kubelet[2663]: E0213 15:32:32.775515 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:32.775576 kubelet[2663]: E0213 15:32:32.775531 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dczc4" Feb 13 15:32:32.775576 kubelet[2663]: E0213 15:32:32.775556 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dczc4_kube-system(f93cafd2-d36c-4948-9ac9-90d542cbe206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dczc4" podUID="f93cafd2-d36c-4948-9ac9-90d542cbe206" Feb 13 15:32:32.786723 containerd[1468]: time="2025-02-13T15:32:32.786672131Z" level=error msg="Failed to destroy network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.787082 containerd[1468]: time="2025-02-13T15:32:32.787057385Z" level=error msg="encountered an error cleaning up failed sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.787147 containerd[1468]: time="2025-02-13T15:32:32.787126274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.787362 kubelet[2663]: E0213 15:32:32.787326 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.787399 kubelet[2663]: E0213 15:32:32.787379 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:32.787424 kubelet[2663]: E0213 15:32:32.787400 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" Feb 13 15:32:32.787459 kubelet[2663]: E0213 15:32:32.787440 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7466fbdd8f-crh5j_calico-system(6a4955fe-2fb1-4e1b-a558-1a75615b1f9d)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podUID="6a4955fe-2fb1-4e1b-a558-1a75615b1f9d" Feb 13 15:32:32.801777 containerd[1468]: time="2025-02-13T15:32:32.801738549Z" level=error msg="Failed to destroy network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.802114 containerd[1468]: time="2025-02-13T15:32:32.802076785Z" level=error msg="encountered an error cleaning up failed sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.802148 containerd[1468]: time="2025-02-13T15:32:32.802136979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.802360 kubelet[2663]: E0213 15:32:32.802313 2663 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:32.802500 kubelet[2663]: E0213 15:32:32.802376 2663 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:32.802500 kubelet[2663]: E0213 15:32:32.802398 2663 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5jfrp" Feb 13 15:32:32.802500 kubelet[2663]: E0213 15:32:32.802448 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5jfrp_kube-system(1d0e938c-1376-4e25-a332-b48365cd1ce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5jfrp" 
podUID="1d0e938c-1376-4e25-a332-b48365cd1ce4" Feb 13 15:32:32.827495 containerd[1468]: time="2025-02-13T15:32:32.827445751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:32.828225 containerd[1468]: time="2025-02-13T15:32:32.828190801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:32:32.829254 containerd[1468]: time="2025-02-13T15:32:32.829209695Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:32.831095 containerd[1468]: time="2025-02-13T15:32:32.831060152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:32.831609 containerd[1468]: time="2025-02-13T15:32:32.831584567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.605911436s" Feb 13 15:32:32.831646 containerd[1468]: time="2025-02-13T15:32:32.831612439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:32:32.838931 containerd[1468]: time="2025-02-13T15:32:32.838884804Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:32:32.862369 containerd[1468]: 
time="2025-02-13T15:32:32.862327471Z" level=info msg="CreateContainer within sandbox \"4dc80ff188280b7b49834efa7ffe2768f884f75debf10b15fbdd1567a7d794ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326\"" Feb 13 15:32:32.862930 containerd[1468]: time="2025-02-13T15:32:32.862762718Z" level=info msg="StartContainer for \"6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326\"" Feb 13 15:32:32.931031 systemd[1]: Started cri-containerd-6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326.scope - libcontainer container 6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326. Feb 13 15:32:32.962597 containerd[1468]: time="2025-02-13T15:32:32.962557784Z" level=info msg="StartContainer for \"6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326\" returns successfully" Feb 13 15:32:33.024702 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:32:33.025333 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 15:32:33.330538 kubelet[2663]: I0213 15:32:33.330503 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e" Feb 13 15:32:33.331116 containerd[1468]: time="2025-02-13T15:32:33.331072226Z" level=info msg="StopPodSandbox for \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\"" Feb 13 15:32:33.331574 containerd[1468]: time="2025-02-13T15:32:33.331299002Z" level=info msg="Ensure that sandbox 43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e in task-service has been cleanup successfully" Feb 13 15:32:33.331574 containerd[1468]: time="2025-02-13T15:32:33.331485382Z" level=info msg="TearDown network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" successfully" Feb 13 15:32:33.331574 containerd[1468]: time="2025-02-13T15:32:33.331496062Z" level=info msg="StopPodSandbox for \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" returns successfully" Feb 13 15:32:33.331932 containerd[1468]: time="2025-02-13T15:32:33.331857912Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" Feb 13 15:32:33.332095 containerd[1468]: time="2025-02-13T15:32:33.332023554Z" level=info msg="TearDown network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" successfully" Feb 13 15:32:33.332095 containerd[1468]: time="2025-02-13T15:32:33.332091481Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" returns successfully" Feb 13 15:32:33.332607 containerd[1468]: time="2025-02-13T15:32:33.332584818Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:33.332702 containerd[1468]: time="2025-02-13T15:32:33.332661101Z" level=info msg="TearDown network for sandbox 
\"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" successfully" Feb 13 15:32:33.332738 containerd[1468]: time="2025-02-13T15:32:33.332701227Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" returns successfully" Feb 13 15:32:33.333003 containerd[1468]: time="2025-02-13T15:32:33.332973137Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:33.333141 containerd[1468]: time="2025-02-13T15:32:33.333049350Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:33.333141 containerd[1468]: time="2025-02-13T15:32:33.333058498Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:33.333489 containerd[1468]: time="2025-02-13T15:32:33.333404267Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:33.333532 containerd[1468]: time="2025-02-13T15:32:33.333503374Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:33.333532 containerd[1468]: time="2025-02-13T15:32:33.333513503Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:33.333756 containerd[1468]: time="2025-02-13T15:32:33.333729758Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:33.333818 containerd[1468]: time="2025-02-13T15:32:33.333802035Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:33.333818 containerd[1468]: time="2025-02-13T15:32:33.333814859Z" level=info msg="StopPodSandbox for 
\"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:33.334027 kubelet[2663]: E0213 15:32:33.334004 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:33.334263 kubelet[2663]: I0213 15:32:33.334231 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d" Feb 13 15:32:33.334567 containerd[1468]: time="2025-02-13T15:32:33.334538508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:6,}" Feb 13 15:32:33.335255 containerd[1468]: time="2025-02-13T15:32:33.334857016Z" level=info msg="StopPodSandbox for \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\"" Feb 13 15:32:33.335255 containerd[1468]: time="2025-02-13T15:32:33.335104391Z" level=info msg="Ensure that sandbox 805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d in task-service has been cleanup successfully" Feb 13 15:32:33.335809 containerd[1468]: time="2025-02-13T15:32:33.335328573Z" level=info msg="TearDown network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" successfully" Feb 13 15:32:33.335809 containerd[1468]: time="2025-02-13T15:32:33.335356555Z" level=info msg="StopPodSandbox for \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" returns successfully" Feb 13 15:32:33.335809 containerd[1468]: time="2025-02-13T15:32:33.335561710Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" Feb 13 15:32:33.335809 containerd[1468]: time="2025-02-13T15:32:33.335635249Z" level=info msg="TearDown network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" successfully" Feb 
13 15:32:33.335809 containerd[1468]: time="2025-02-13T15:32:33.335644556Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" returns successfully" Feb 13 15:32:33.335952 containerd[1468]: time="2025-02-13T15:32:33.335867815Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:33.336030 containerd[1468]: time="2025-02-13T15:32:33.336009722Z" level=info msg="TearDown network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" successfully" Feb 13 15:32:33.336030 containerd[1468]: time="2025-02-13T15:32:33.336026654Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" returns successfully" Feb 13 15:32:33.336285 containerd[1468]: time="2025-02-13T15:32:33.336266415Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:33.336352 containerd[1468]: time="2025-02-13T15:32:33.336336506Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:33.336352 containerd[1468]: time="2025-02-13T15:32:33.336349390Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:33.336878 containerd[1468]: time="2025-02-13T15:32:33.336759681Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:33.336878 containerd[1468]: time="2025-02-13T15:32:33.336828901Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:33.336878 containerd[1468]: time="2025-02-13T15:32:33.336837518Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 
15:32:33.337204 containerd[1468]: time="2025-02-13T15:32:33.337176414Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:33.337374 containerd[1468]: time="2025-02-13T15:32:33.337313752Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:33.337374 containerd[1468]: time="2025-02-13T15:32:33.337328159Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:33.337519 kubelet[2663]: E0213 15:32:33.337501 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:33.338031 kubelet[2663]: E0213 15:32:33.337748 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:33.338087 containerd[1468]: time="2025-02-13T15:32:33.337827397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:6,}" Feb 13 15:32:33.343013 kubelet[2663]: I0213 15:32:33.342648 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60" Feb 13 15:32:33.345893 containerd[1468]: time="2025-02-13T15:32:33.345825443Z" level=info msg="StopPodSandbox for \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\"" Feb 13 15:32:33.346387 containerd[1468]: time="2025-02-13T15:32:33.346220415Z" level=info msg="Ensure that sandbox d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60 in task-service has been cleanup successfully" Feb 13 15:32:33.346540 containerd[1468]: 
time="2025-02-13T15:32:33.346495422Z" level=info msg="TearDown network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" successfully" Feb 13 15:32:33.346636 containerd[1468]: time="2025-02-13T15:32:33.346621348Z" level=info msg="StopPodSandbox for \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" returns successfully" Feb 13 15:32:33.347008 kubelet[2663]: I0213 15:32:33.346983 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b" Feb 13 15:32:33.347653 containerd[1468]: time="2025-02-13T15:32:33.347476304Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" Feb 13 15:32:33.347653 containerd[1468]: time="2025-02-13T15:32:33.347554491Z" level=info msg="TearDown network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" successfully" Feb 13 15:32:33.347653 containerd[1468]: time="2025-02-13T15:32:33.347592473Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" returns successfully" Feb 13 15:32:33.347653 containerd[1468]: time="2025-02-13T15:32:33.347481584Z" level=info msg="StopPodSandbox for \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\"" Feb 13 15:32:33.347896 containerd[1468]: time="2025-02-13T15:32:33.347746071Z" level=info msg="Ensure that sandbox 0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b in task-service has been cleanup successfully" Feb 13 15:32:33.348454 containerd[1468]: time="2025-02-13T15:32:33.348286677Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:33.348454 containerd[1468]: time="2025-02-13T15:32:33.348376716Z" level=info msg="TearDown network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" successfully" Feb 13 
15:32:33.348454 containerd[1468]: time="2025-02-13T15:32:33.348389700Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" returns successfully" Feb 13 15:32:33.348666 containerd[1468]: time="2025-02-13T15:32:33.348613591Z" level=info msg="TearDown network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" successfully" Feb 13 15:32:33.348666 containerd[1468]: time="2025-02-13T15:32:33.348631765Z" level=info msg="StopPodSandbox for \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" returns successfully" Feb 13 15:32:33.348998 containerd[1468]: time="2025-02-13T15:32:33.348720451Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:33.348998 containerd[1468]: time="2025-02-13T15:32:33.348797386Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:33.348998 containerd[1468]: time="2025-02-13T15:32:33.348807355Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:33.349166 containerd[1468]: time="2025-02-13T15:32:33.349118620Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" Feb 13 15:32:33.349166 containerd[1468]: time="2025-02-13T15:32:33.349137545Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:33.349305 containerd[1468]: time="2025-02-13T15:32:33.349200403Z" level=info msg="TearDown network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" successfully" Feb 13 15:32:33.349305 containerd[1468]: time="2025-02-13T15:32:33.349215101Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" returns successfully" Feb 13 
15:32:33.349305 containerd[1468]: time="2025-02-13T15:32:33.349242803Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:33.349305 containerd[1468]: time="2025-02-13T15:32:33.349253553Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:33.349577 containerd[1468]: time="2025-02-13T15:32:33.349551262Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:33.349577 containerd[1468]: time="2025-02-13T15:32:33.349568194Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:33.349654 containerd[1468]: time="2025-02-13T15:32:33.349636753Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:33.349654 containerd[1468]: time="2025-02-13T15:32:33.349651340Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:33.349700 containerd[1468]: time="2025-02-13T15:32:33.349656660Z" level=info msg="TearDown network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" successfully" Feb 13 15:32:33.349700 containerd[1468]: time="2025-02-13T15:32:33.349668183Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" returns successfully" Feb 13 15:32:33.350001 containerd[1468]: time="2025-02-13T15:32:33.349976121Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:33.350179 containerd[1468]: time="2025-02-13T15:32:33.350124960Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 
15:32:33.350179 containerd[1468]: time="2025-02-13T15:32:33.350138776Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:33.350228 containerd[1468]: time="2025-02-13T15:32:33.350194360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:32:33.350468 containerd[1468]: time="2025-02-13T15:32:33.350444951Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:33.350551 containerd[1468]: time="2025-02-13T15:32:33.350525312Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:33.350551 containerd[1468]: time="2025-02-13T15:32:33.350540460Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:33.350777 containerd[1468]: time="2025-02-13T15:32:33.350753711Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:33.350885 containerd[1468]: time="2025-02-13T15:32:33.350850733Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:33.350885 containerd[1468]: time="2025-02-13T15:32:33.350869439Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:33.351466 containerd[1468]: time="2025-02-13T15:32:33.351285180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:6,}" Feb 13 15:32:33.351953 kubelet[2663]: I0213 15:32:33.351892 2663 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f" Feb 13 15:32:33.354470 containerd[1468]: time="2025-02-13T15:32:33.354429106Z" level=info msg="StopPodSandbox for \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\"" Feb 13 15:32:33.354626 containerd[1468]: time="2025-02-13T15:32:33.354608173Z" level=info msg="Ensure that sandbox 34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f in task-service has been cleanup successfully" Feb 13 15:32:33.354739 kubelet[2663]: I0213 15:32:33.354702 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-svldr" podStartSLOduration=1.385368187 podStartE2EDuration="17.354680559s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:16.862855341 +0000 UTC m=+22.821891936" lastFinishedPulling="2025-02-13 15:32:32.832167713 +0000 UTC m=+38.791204308" observedRunningTime="2025-02-13 15:32:33.351819855 +0000 UTC m=+39.310856450" watchObservedRunningTime="2025-02-13 15:32:33.354680559 +0000 UTC m=+39.313717154" Feb 13 15:32:33.354977 containerd[1468]: time="2025-02-13T15:32:33.354863452Z" level=info msg="TearDown network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" successfully" Feb 13 15:32:33.354977 containerd[1468]: time="2025-02-13T15:32:33.354875856Z" level=info msg="StopPodSandbox for \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" returns successfully" Feb 13 15:32:33.356199 containerd[1468]: time="2025-02-13T15:32:33.356166200Z" level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" Feb 13 15:32:33.356286 containerd[1468]: time="2025-02-13T15:32:33.356263112Z" level=info msg="TearDown network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" successfully" Feb 13 15:32:33.356286 containerd[1468]: time="2025-02-13T15:32:33.356279193Z" 
level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" returns successfully" Feb 13 15:32:33.356649 kubelet[2663]: I0213 15:32:33.356621 2663 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee" Feb 13 15:32:33.357088 containerd[1468]: time="2025-02-13T15:32:33.357062353Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:33.357170 containerd[1468]: time="2025-02-13T15:32:33.357145108Z" level=info msg="TearDown network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" successfully" Feb 13 15:32:33.357170 containerd[1468]: time="2025-02-13T15:32:33.357166009Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" returns successfully" Feb 13 15:32:33.357223 containerd[1468]: time="2025-02-13T15:32:33.357145229Z" level=info msg="StopPodSandbox for \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\"" Feb 13 15:32:33.357364 containerd[1468]: time="2025-02-13T15:32:33.357342129Z" level=info msg="Ensure that sandbox b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee in task-service has been cleanup successfully" Feb 13 15:32:33.357794 containerd[1468]: time="2025-02-13T15:32:33.357539250Z" level=info msg="TearDown network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" successfully" Feb 13 15:32:33.357794 containerd[1468]: time="2025-02-13T15:32:33.357555510Z" level=info msg="StopPodSandbox for \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" returns successfully" Feb 13 15:32:33.357794 containerd[1468]: time="2025-02-13T15:32:33.357635791Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:33.357794 containerd[1468]: 
time="2025-02-13T15:32:33.357733985Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:33.357794 containerd[1468]: time="2025-02-13T15:32:33.357745998Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:33.358044 containerd[1468]: time="2025-02-13T15:32:33.357811100Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" Feb 13 15:32:33.358044 containerd[1468]: time="2025-02-13T15:32:33.357885790Z" level=info msg="TearDown network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" successfully" Feb 13 15:32:33.358044 containerd[1468]: time="2025-02-13T15:32:33.357896781Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" returns successfully" Feb 13 15:32:33.358362 containerd[1468]: time="2025-02-13T15:32:33.358269241Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:33.358486 containerd[1468]: time="2025-02-13T15:32:33.358467874Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:33.358486 containerd[1468]: time="2025-02-13T15:32:33.358482892Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:33.358542 containerd[1468]: time="2025-02-13T15:32:33.358323252Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:33.358614 containerd[1468]: time="2025-02-13T15:32:33.358594652Z" level=info msg="TearDown network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" successfully" Feb 13 15:32:33.358614 containerd[1468]: 
time="2025-02-13T15:32:33.358608398Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" returns successfully" Feb 13 15:32:33.358894 containerd[1468]: time="2025-02-13T15:32:33.358870941Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:33.359121 containerd[1468]: time="2025-02-13T15:32:33.359048886Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:33.359200 containerd[1468]: time="2025-02-13T15:32:33.359161557Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:33.359200 containerd[1468]: time="2025-02-13T15:32:33.359174591Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:33.359259 containerd[1468]: time="2025-02-13T15:32:33.359242960Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:33.359259 containerd[1468]: time="2025-02-13T15:32:33.359256546Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:33.359807 containerd[1468]: time="2025-02-13T15:32:33.359437125Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:33.359807 containerd[1468]: time="2025-02-13T15:32:33.359513869Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:33.359807 containerd[1468]: time="2025-02-13T15:32:33.359523166Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:33.359807 containerd[1468]: 
time="2025-02-13T15:32:33.359580054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:6,}" Feb 13 15:32:33.360149 containerd[1468]: time="2025-02-13T15:32:33.360131018Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:33.360220 containerd[1468]: time="2025-02-13T15:32:33.360206691Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:33.360220 containerd[1468]: time="2025-02-13T15:32:33.360217641Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:33.360561 containerd[1468]: time="2025-02-13T15:32:33.360532924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:32:33.482301 systemd[1]: run-netns-cni\x2d19fecd67\x2dacf0\x2d0152\x2dd0cf\x2d0075ed78ee0a.mount: Deactivated successfully. Feb 13 15:32:33.482486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60-shm.mount: Deactivated successfully. Feb 13 15:32:33.482562 systemd[1]: run-netns-cni\x2d3c029917\x2da787\x2d7ecc\x2d902f\x2d6f26f4472fb9.mount: Deactivated successfully. Feb 13 15:32:33.482630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b-shm.mount: Deactivated successfully. Feb 13 15:32:33.482701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251142197.mount: Deactivated successfully. 
Feb 13 15:32:33.587804 systemd-networkd[1403]: calicaa8929c17e: Link UP Feb 13 15:32:33.588612 systemd-networkd[1403]: calicaa8929c17e: Gained carrier Feb 13 15:32:33.606848 systemd-networkd[1403]: cali287cd7afa64: Link UP Feb 13 15:32:33.607162 systemd-networkd[1403]: cali287cd7afa64: Gained carrier Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.438 [INFO][4886] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.453 [INFO][4886] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xbpjc-eth0 csi-node-driver- calico-system 757af110-1c95-44e4-a60e-64cc5c9b9a1e 606 0 2025-02-13 15:32:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xbpjc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicaa8929c17e [] []}} ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.454 [INFO][4886] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.515 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" 
HandleID="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Workload="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.525 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" HandleID="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Workload="localhost-k8s-csi--node--driver--xbpjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xbpjc", "timestamp":"2025-02-13 15:32:33.51574036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.525 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.525 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.525 [INFO][4929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.528 [INFO][4929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.532 [INFO][4929] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.536 [INFO][4929] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.537 [INFO][4929] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.539 [INFO][4929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.539 [INFO][4929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.540 [INFO][4929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7 Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.564 [INFO][4929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" host="localhost" Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:32:33.607557 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" HandleID="k8s-pod-network.d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Workload="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.576 [INFO][4886] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xbpjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"757af110-1c95-44e4-a60e-64cc5c9b9a1e", ResourceVersion:"606", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xbpjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicaa8929c17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.577 [INFO][4886] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.577 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaa8929c17e ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.588 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.589 [INFO][4886] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" 
Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xbpjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"757af110-1c95-44e4-a60e-64cc5c9b9a1e", ResourceVersion:"606", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7", Pod:"csi-node-driver-xbpjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicaa8929c17e", MAC:"1a:46:f6:db:76:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.609308 containerd[1468]: 2025-02-13 15:32:33.601 [INFO][4886] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7" Namespace="calico-system" Pod="csi-node-driver-xbpjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xbpjc-eth0" Feb 13 15:32:33.622532 containerd[1468]: 
2025-02-13 15:32:33.383 [INFO][4846] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.399 [INFO][4846] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0 coredns-7db6d8ff4d- kube-system 1d0e938c-1376-4e25-a332-b48365cd1ce4 749 0 2025-02-13 15:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5jfrp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali287cd7afa64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.399 [INFO][4846] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.515 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" HandleID="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Workload="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.527 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" HandleID="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" 
Workload="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5jfrp", "timestamp":"2025-02-13 15:32:33.515590718 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.527 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.571 [INFO][4873] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.572 [INFO][4873] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.576 [INFO][4873] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.581 [INFO][4873] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.582 [INFO][4873] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.584 [INFO][4873] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.584 [INFO][4873] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.585 [INFO][4873] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.594 [INFO][4873] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4873] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4873] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" host="localhost" Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:32:33.622532 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" HandleID="k8s-pod-network.c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Workload="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.605 [INFO][4846] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d0e938c-1376-4e25-a332-b48365cd1ce4", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5jfrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287cd7afa64", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.605 [INFO][4846] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.605 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali287cd7afa64 ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.606 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.607 [INFO][4846] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d0e938c-1376-4e25-a332-b48365cd1ce4", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf", Pod:"coredns-7db6d8ff4d-5jfrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali287cd7afa64", MAC:"8a:19:cc:34:1a:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.623295 containerd[1468]: 2025-02-13 15:32:33.619 [INFO][4846] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-5jfrp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5jfrp-eth0" Feb 13 15:32:33.650874 systemd-networkd[1403]: calie78539bbc12: Link UP Feb 13 15:32:33.651087 systemd-networkd[1403]: calie78539bbc12: Gained carrier Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.434 [INFO][4876] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.449 [INFO][4876] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0 calico-apiserver-5bd68f4995- calico-apiserver 69e7b0a9-0728-4031-b89b-c2766dc8da1b 751 0 2025-02-13 15:32:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd68f4995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bd68f4995-bdz7b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie78539bbc12 [] []}} ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.449 [INFO][4876] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.515 [INFO][4924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" 
HandleID="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Workload="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.528 [INFO][4924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" HandleID="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Workload="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f55b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bd68f4995-bdz7b", "timestamp":"2025-02-13 15:32:33.515728548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.528 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.602 [INFO][4924] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.604 [INFO][4924] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.612 [INFO][4924] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.620 [INFO][4924] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.624 [INFO][4924] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.626 [INFO][4924] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.626 [INFO][4924] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.627 [INFO][4924] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.633 [INFO][4924] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4924] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4924] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" host="localhost" Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:32:33.669488 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" HandleID="k8s-pod-network.80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Workload="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.648 [INFO][4876] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0", GenerateName:"calico-apiserver-5bd68f4995-", Namespace:"calico-apiserver", SelfLink:"", UID:"69e7b0a9-0728-4031-b89b-c2766dc8da1b", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd68f4995", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bd68f4995-bdz7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie78539bbc12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.648 [INFO][4876] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.648 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie78539bbc12 ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.651 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.651 [INFO][4876] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0", GenerateName:"calico-apiserver-5bd68f4995-", Namespace:"calico-apiserver", SelfLink:"", UID:"69e7b0a9-0728-4031-b89b-c2766dc8da1b", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd68f4995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d", Pod:"calico-apiserver-5bd68f4995-bdz7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie78539bbc12", MAC:"52:3e:23:9e:d0:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.670382 containerd[1468]: 2025-02-13 15:32:33.660 [INFO][4876] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-bdz7b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--bdz7b-eth0" Feb 13 15:32:33.678564 containerd[1468]: time="2025-02-13T15:32:33.676884881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.678564 containerd[1468]: time="2025-02-13T15:32:33.676957827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.678564 containerd[1468]: time="2025-02-13T15:32:33.676974559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.678564 containerd[1468]: time="2025-02-13T15:32:33.677068765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.683333 containerd[1468]: time="2025-02-13T15:32:33.682768003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.683333 containerd[1468]: time="2025-02-13T15:32:33.682846560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.683333 containerd[1468]: time="2025-02-13T15:32:33.682865646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.687362 containerd[1468]: time="2025-02-13T15:32:33.685671357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.701803 systemd-networkd[1403]: cali77dd18c2107: Link UP Feb 13 15:32:33.702401 systemd-networkd[1403]: cali77dd18c2107: Gained carrier Feb 13 15:32:33.723741 systemd[1]: run-containerd-runc-k8s.io-c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf-runc.pQEEyF.mount: Deactivated successfully. Feb 13 15:32:33.728179 containerd[1468]: time="2025-02-13T15:32:33.726340451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.728179 containerd[1468]: time="2025-02-13T15:32:33.726425792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.728179 containerd[1468]: time="2025-02-13T15:32:33.726471728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.728179 containerd[1468]: time="2025-02-13T15:32:33.726542942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.391 [INFO][4857] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.401 [INFO][4857] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0 coredns-7db6d8ff4d- kube-system f93cafd2-d36c-4948-9ac9-90d542cbe206 750 0 2025-02-13 15:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-dczc4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77dd18c2107 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.401 [INFO][4857] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.515 [INFO][4875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" HandleID="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Workload="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.530 [INFO][4875] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" HandleID="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Workload="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003754e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-dczc4", "timestamp":"2025-02-13 15:32:33.515590759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.530 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.641 [INFO][4875] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.644 [INFO][4875] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.652 [INFO][4875] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.658 [INFO][4875] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.663 [INFO][4875] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.665 [INFO][4875] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.665 [INFO][4875] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.667 [INFO][4875] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56 Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.676 [INFO][4875] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.687 [INFO][4875] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.687 [INFO][4875] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" host="localhost" Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.687 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:32:33.732507 containerd[1468]: 2025-02-13 15:32:33.687 [INFO][4875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" HandleID="k8s-pod-network.91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Workload="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.698 [INFO][4857] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f93cafd2-d36c-4948-9ac9-90d542cbe206", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-dczc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77dd18c2107", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.698 [INFO][4857] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.698 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77dd18c2107 ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.703 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.703 [INFO][4857] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f93cafd2-d36c-4948-9ac9-90d542cbe206", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56", Pod:"coredns-7db6d8ff4d-dczc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77dd18c2107", MAC:"ba:6c:9d:0a:03:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.733469 containerd[1468]: 2025-02-13 15:32:33.715 [INFO][4857] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-dczc4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--dczc4-eth0" Feb 13 15:32:33.736041 systemd[1]: Started cri-containerd-c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf.scope - libcontainer container c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf. Feb 13 15:32:33.737838 systemd[1]: Started cri-containerd-d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7.scope - libcontainer container d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7. Feb 13 15:32:33.774209 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:33.783030 systemd-networkd[1403]: calib27c4c8fc6e: Link UP Feb 13 15:32:33.783867 systemd-networkd[1403]: calib27c4c8fc6e: Gained carrier Feb 13 15:32:33.787026 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:33.801093 systemd[1]: Started cri-containerd-80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d.scope - libcontainer container 80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d. Feb 13 15:32:33.810343 containerd[1468]: time="2025-02-13T15:32:33.806801022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.810343 containerd[1468]: time="2025-02-13T15:32:33.806976000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.810343 containerd[1468]: time="2025-02-13T15:32:33.807100094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.810343 containerd[1468]: time="2025-02-13T15:32:33.807460771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.495 [INFO][4912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.527 [INFO][4912] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0 calico-kube-controllers-7466fbdd8f- calico-system 6a4955fe-2fb1-4e1b-a558-1a75615b1f9d 752 0 2025-02-13 15:32:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7466fbdd8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7466fbdd8f-crh5j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib27c4c8fc6e [] []}} ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.527 [INFO][4912] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.556 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" HandleID="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" 
Workload="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.566 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" HandleID="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Workload="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000281360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7466fbdd8f-crh5j", "timestamp":"2025-02-13 15:32:33.556209339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.566 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.689 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.690 [INFO][4947] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.692 [INFO][4947] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.698 [INFO][4947] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.703 [INFO][4947] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.708 [INFO][4947] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.715 [INFO][4947] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.716 [INFO][4947] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.719 [INFO][4947] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0 Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.727 [INFO][4947] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.737 [INFO][4947] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.738 [INFO][4947] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" host="localhost" Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.738 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:32:33.833085 containerd[1468]: 2025-02-13 15:32:33.738 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" HandleID="k8s-pod-network.ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Workload="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.763 [INFO][4912] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0", GenerateName:"calico-kube-controllers-7466fbdd8f-", Namespace:"calico-system", SelfLink:"", UID:"6a4955fe-2fb1-4e1b-a558-1a75615b1f9d", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7466fbdd8f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7466fbdd8f-crh5j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib27c4c8fc6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.763 [INFO][4912] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.763 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib27c4c8fc6e ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.786 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.789 [INFO][4912] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0", GenerateName:"calico-kube-controllers-7466fbdd8f-", Namespace:"calico-system", SelfLink:"", UID:"6a4955fe-2fb1-4e1b-a558-1a75615b1f9d", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7466fbdd8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0", Pod:"calico-kube-controllers-7466fbdd8f-crh5j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib27c4c8fc6e", MAC:"62:56:60:a4:75:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.834009 containerd[1468]: 2025-02-13 15:32:33.817 [INFO][4912] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0" Namespace="calico-system" Pod="calico-kube-controllers-7466fbdd8f-crh5j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7466fbdd8f--crh5j-eth0" Feb 13 15:32:33.838744 systemd[1]: Started cri-containerd-91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56.scope - libcontainer container 91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56. Feb 13 15:32:33.864003 containerd[1468]: time="2025-02-13T15:32:33.863462321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5jfrp,Uid:1d0e938c-1376-4e25-a332-b48365cd1ce4,Namespace:kube-system,Attempt:6,} returns sandbox id \"c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf\"" Feb 13 15:32:33.868779 kubelet[2663]: E0213 15:32:33.868166 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:33.868972 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:33.870496 containerd[1468]: time="2025-02-13T15:32:33.870468212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbpjc,Uid:757af110-1c95-44e4-a60e-64cc5c9b9a1e,Namespace:calico-system,Attempt:6,} returns sandbox id \"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7\"" Feb 13 15:32:33.874230 systemd-networkd[1403]: cali0ce0c930e53: Link UP Feb 13 15:32:33.875961 systemd-networkd[1403]: cali0ce0c930e53: Gained carrier Feb 13 15:32:33.880260 containerd[1468]: time="2025-02-13T15:32:33.880006031Z" level=info msg="CreateContainer within sandbox \"c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:33.879985 systemd-resolved[1337]: Failed to determine the 
local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:33.887168 containerd[1468]: time="2025-02-13T15:32:33.887097544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.596 [INFO][4954] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.619 [INFO][4954] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0 calico-apiserver-5bd68f4995- calico-apiserver bde67ccf-8db2-4a00-9ea9-11acd382b495 746 0 2025-02-13 15:32:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd68f4995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bd68f4995-w55w7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ce0c930e53 [] []}} ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.619 [INFO][4954] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.657 [INFO][4989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" 
HandleID="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Workload="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.669 [INFO][4989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" HandleID="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Workload="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bd68f4995-w55w7", "timestamp":"2025-02-13 15:32:33.65777022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.669 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.738 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.738 [INFO][4989] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.740 [INFO][4989] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.753 [INFO][4989] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.759 [INFO][4989] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.764 [INFO][4989] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.766 [INFO][4989] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.766 [INFO][4989] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.770 [INFO][4989] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334 Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.776 [INFO][4989] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.787 [INFO][4989] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.787 [INFO][4989] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" host="localhost" Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.789 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:32:33.907498 containerd[1468]: 2025-02-13 15:32:33.789 [INFO][4989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" HandleID="k8s-pod-network.d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Workload="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.842 [INFO][4954] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0", GenerateName:"calico-apiserver-5bd68f4995-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde67ccf-8db2-4a00-9ea9-11acd382b495", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd68f4995", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bd68f4995-w55w7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ce0c930e53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.843 [INFO][4954] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.843 [INFO][4954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ce0c930e53 ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.873 [INFO][4954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.884 [INFO][4954] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0", GenerateName:"calico-apiserver-5bd68f4995-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde67ccf-8db2-4a00-9ea9-11acd382b495", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd68f4995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334", Pod:"calico-apiserver-5bd68f4995-w55w7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ce0c930e53", MAC:"1a:e1:e2:96:6b:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:33.908856 containerd[1468]: 2025-02-13 15:32:33.902 [INFO][4954] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334" Namespace="calico-apiserver" Pod="calico-apiserver-5bd68f4995-w55w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bd68f4995--w55w7-eth0" Feb 13 15:32:33.909123 containerd[1468]: time="2025-02-13T15:32:33.908214957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.909123 containerd[1468]: time="2025-02-13T15:32:33.908279659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.909123 containerd[1468]: time="2025-02-13T15:32:33.908293695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.911958 containerd[1468]: time="2025-02-13T15:32:33.911814078Z" level=info msg="CreateContainer within sandbox \"c251fff98689276f5545b51f3693bc5056b3c0c87dcfb3e0bbd637b7bb1326bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5eb27e4589960341d49fa98d6bb57119700d7349fb552eed4824593726c60758\"" Feb 13 15:32:33.913028 containerd[1468]: time="2025-02-13T15:32:33.912688832Z" level=info msg="StartContainer for \"5eb27e4589960341d49fa98d6bb57119700d7349fb552eed4824593726c60758\"" Feb 13 15:32:33.914145 containerd[1468]: time="2025-02-13T15:32:33.913943489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.930297 containerd[1468]: time="2025-02-13T15:32:33.929972362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dczc4,Uid:f93cafd2-d36c-4948-9ac9-90d542cbe206,Namespace:kube-system,Attempt:6,} returns sandbox id \"91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56\"" Feb 13 15:32:33.931268 kubelet[2663]: E0213 15:32:33.931244 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:33.936530 containerd[1468]: time="2025-02-13T15:32:33.936483655Z" level=info msg="CreateContainer within sandbox \"91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:33.942137 systemd[1]: Started cri-containerd-ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0.scope - libcontainer container ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0. Feb 13 15:32:33.945842 containerd[1468]: time="2025-02-13T15:32:33.945290100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-bdz7b,Uid:69e7b0a9-0728-4031-b89b-c2766dc8da1b,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d\"" Feb 13 15:32:33.951943 containerd[1468]: time="2025-02-13T15:32:33.950532499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:33.951943 containerd[1468]: time="2025-02-13T15:32:33.950585438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:33.951943 containerd[1468]: time="2025-02-13T15:32:33.950600226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.951943 containerd[1468]: time="2025-02-13T15:32:33.950668624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:33.966034 systemd[1]: Started cri-containerd-5eb27e4589960341d49fa98d6bb57119700d7349fb552eed4824593726c60758.scope - libcontainer container 5eb27e4589960341d49fa98d6bb57119700d7349fb552eed4824593726c60758. Feb 13 15:32:33.966755 containerd[1468]: time="2025-02-13T15:32:33.966710162Z" level=info msg="CreateContainer within sandbox \"91c06fd8013781e5421379362841c38f455ceb6dc1cec0960dfd75031003bd56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30c506b914369d7d064618e533145fcc8425bbf8ff4e2c4b41e98725dcdd4d26\"" Feb 13 15:32:33.967621 containerd[1468]: time="2025-02-13T15:32:33.967588050Z" level=info msg="StartContainer for \"30c506b914369d7d064618e533145fcc8425bbf8ff4e2c4b41e98725dcdd4d26\"" Feb 13 15:32:33.976125 systemd[1]: Started cri-containerd-d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334.scope - libcontainer container d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334. Feb 13 15:32:33.981792 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:34.005185 systemd[1]: Started cri-containerd-30c506b914369d7d064618e533145fcc8425bbf8ff4e2c4b41e98725dcdd4d26.scope - libcontainer container 30c506b914369d7d064618e533145fcc8425bbf8ff4e2c4b41e98725dcdd4d26. 
Feb 13 15:32:34.015675 containerd[1468]: time="2025-02-13T15:32:34.015525797Z" level=info msg="StartContainer for \"5eb27e4589960341d49fa98d6bb57119700d7349fb552eed4824593726c60758\" returns successfully" Feb 13 15:32:34.017380 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:32:34.020548 containerd[1468]: time="2025-02-13T15:32:34.020443445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7466fbdd8f-crh5j,Uid:6a4955fe-2fb1-4e1b-a558-1a75615b1f9d,Namespace:calico-system,Attempt:6,} returns sandbox id \"ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0\"" Feb 13 15:32:34.039881 containerd[1468]: time="2025-02-13T15:32:34.039723312Z" level=info msg="StartContainer for \"30c506b914369d7d064618e533145fcc8425bbf8ff4e2c4b41e98725dcdd4d26\" returns successfully" Feb 13 15:32:34.052600 containerd[1468]: time="2025-02-13T15:32:34.052561618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd68f4995-w55w7,Uid:bde67ccf-8db2-4a00-9ea9-11acd382b495,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334\"" Feb 13 15:32:34.197996 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:45316.service - OpenSSH per-connection server daemon (10.0.0.1:45316). Feb 13 15:32:34.282822 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 45316 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:34.284660 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:34.291114 systemd-logind[1451]: New session 12 of user core. Feb 13 15:32:34.297070 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 15:32:34.371957 kubelet[2663]: E0213 15:32:34.371832 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:34.421959 kubelet[2663]: E0213 15:32:34.421915 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:34.472281 kubelet[2663]: I0213 15:32:34.472007 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dczc4" podStartSLOduration=25.471990638 podStartE2EDuration="25.471990638s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:34.471396842 +0000 UTC m=+40.430433437" watchObservedRunningTime="2025-02-13 15:32:34.471990638 +0000 UTC m=+40.431027233" Feb 13 15:32:34.650045 systemd-networkd[1403]: cali287cd7afa64: Gained IPv6LL Feb 13 15:32:34.799938 kernel: bpftool[5522]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:32:34.821873 kubelet[2663]: I0213 15:32:34.821664 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5jfrp" podStartSLOduration=25.821645687 podStartE2EDuration="25.821645687s" podCreationTimestamp="2025-02-13 15:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:34.821322289 +0000 UTC m=+40.780358884" watchObservedRunningTime="2025-02-13 15:32:34.821645687 +0000 UTC m=+40.780682282" Feb 13 15:32:34.868230 sshd[5466]: Connection closed by 10.0.0.1 port 45316 Feb 13 15:32:34.868572 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:34.873013 systemd[1]: 
sshd@11-10.0.0.113:22-10.0.0.1:45316.service: Deactivated successfully. Feb 13 15:32:34.875443 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:32:34.876347 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:32:34.877359 systemd-logind[1451]: Removed session 12. Feb 13 15:32:34.906015 systemd-networkd[1403]: calicaa8929c17e: Gained IPv6LL Feb 13 15:32:34.970068 systemd-networkd[1403]: calib27c4c8fc6e: Gained IPv6LL Feb 13 15:32:35.066803 systemd-networkd[1403]: vxlan.calico: Link UP Feb 13 15:32:35.066812 systemd-networkd[1403]: vxlan.calico: Gained carrier Feb 13 15:32:35.290117 systemd-networkd[1403]: cali0ce0c930e53: Gained IPv6LL Feb 13 15:32:35.291706 systemd-networkd[1403]: cali77dd18c2107: Gained IPv6LL Feb 13 15:32:35.413426 kubelet[2663]: E0213 15:32:35.413135 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:35.413426 kubelet[2663]: E0213 15:32:35.413316 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:35.482125 systemd-networkd[1403]: calie78539bbc12: Gained IPv6LL Feb 13 15:32:36.246556 containerd[1468]: time="2025-02-13T15:32:36.246495069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:36.247199 containerd[1468]: time="2025-02-13T15:32:36.247133188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:32:36.248517 containerd[1468]: time="2025-02-13T15:32:36.248492060Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
15:32:36.253034 containerd[1468]: time="2025-02-13T15:32:36.252989116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:36.253677 containerd[1468]: time="2025-02-13T15:32:36.253644978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.366506497s" Feb 13 15:32:36.253729 containerd[1468]: time="2025-02-13T15:32:36.253677950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:32:36.254871 containerd[1468]: time="2025-02-13T15:32:36.254843068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:32:36.256073 containerd[1468]: time="2025-02-13T15:32:36.256040046Z" level=info msg="CreateContainer within sandbox \"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:32:36.281427 containerd[1468]: time="2025-02-13T15:32:36.281334361Z" level=info msg="CreateContainer within sandbox \"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e5321b125b374d617ff17080c6fbf5d652b967a8cc7d5a59aa791231ba989945\"" Feb 13 15:32:36.282005 containerd[1468]: time="2025-02-13T15:32:36.281972760Z" level=info msg="StartContainer for \"e5321b125b374d617ff17080c6fbf5d652b967a8cc7d5a59aa791231ba989945\"" Feb 13 15:32:36.321097 systemd[1]: Started 
cri-containerd-e5321b125b374d617ff17080c6fbf5d652b967a8cc7d5a59aa791231ba989945.scope - libcontainer container e5321b125b374d617ff17080c6fbf5d652b967a8cc7d5a59aa791231ba989945. Feb 13 15:32:36.352980 containerd[1468]: time="2025-02-13T15:32:36.352497904Z" level=info msg="StartContainer for \"e5321b125b374d617ff17080c6fbf5d652b967a8cc7d5a59aa791231ba989945\" returns successfully" Feb 13 15:32:36.417648 kubelet[2663]: E0213 15:32:36.417606 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:36.954038 systemd-networkd[1403]: vxlan.calico: Gained IPv6LL Feb 13 15:32:38.871317 containerd[1468]: time="2025-02-13T15:32:38.871267486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:38.872085 containerd[1468]: time="2025-02-13T15:32:38.872052049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 15:32:38.873314 containerd[1468]: time="2025-02-13T15:32:38.873280967Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:38.875544 containerd[1468]: time="2025-02-13T15:32:38.875481710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:38.876177 containerd[1468]: time="2025-02-13T15:32:38.876142570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.621263604s" Feb 13 15:32:38.876222 containerd[1468]: time="2025-02-13T15:32:38.876180471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:32:38.877197 containerd[1468]: time="2025-02-13T15:32:38.877167926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:32:38.878177 containerd[1468]: time="2025-02-13T15:32:38.878145201Z" level=info msg="CreateContainer within sandbox \"80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:32:38.890868 containerd[1468]: time="2025-02-13T15:32:38.890834609Z" level=info msg="CreateContainer within sandbox \"80e0bbc3d4de21b39b3f60579ff40cada6563d3c7aec416aa9e9cc145c13834d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"695856fb6ca298925f871f95aff00dac1d97b253ed909d62efb5a3feec0da00f\"" Feb 13 15:32:38.891381 containerd[1468]: time="2025-02-13T15:32:38.891348684Z" level=info msg="StartContainer for \"695856fb6ca298925f871f95aff00dac1d97b253ed909d62efb5a3feec0da00f\"" Feb 13 15:32:38.921052 systemd[1]: Started cri-containerd-695856fb6ca298925f871f95aff00dac1d97b253ed909d62efb5a3feec0da00f.scope - libcontainer container 695856fb6ca298925f871f95aff00dac1d97b253ed909d62efb5a3feec0da00f. 
Feb 13 15:32:38.962815 containerd[1468]: time="2025-02-13T15:32:38.962775958Z" level=info msg="StartContainer for \"695856fb6ca298925f871f95aff00dac1d97b253ed909d62efb5a3feec0da00f\" returns successfully" Feb 13 15:32:39.434888 kubelet[2663]: I0213 15:32:39.434837 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd68f4995-bdz7b" podStartSLOduration=18.504315096 podStartE2EDuration="23.434818961s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:33.94641336 +0000 UTC m=+39.905449955" lastFinishedPulling="2025-02-13 15:32:38.876917225 +0000 UTC m=+44.835953820" observedRunningTime="2025-02-13 15:32:39.434527012 +0000 UTC m=+45.393563607" watchObservedRunningTime="2025-02-13 15:32:39.434818961 +0000 UTC m=+45.393855556" Feb 13 15:32:39.521772 kubelet[2663]: I0213 15:32:39.521727 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:39.522624 kubelet[2663]: E0213 15:32:39.522551 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:39.883273 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:57504.service - OpenSSH per-connection server daemon (10.0.0.1:57504). Feb 13 15:32:39.987585 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 57504 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:39.989453 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:39.994563 systemd-logind[1451]: New session 13 of user core. Feb 13 15:32:40.007196 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:32:40.138992 sshd[5750]: Connection closed by 10.0.0.1 port 57504 Feb 13 15:32:40.140506 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:40.148139 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:57504.service: Deactivated successfully. Feb 13 15:32:40.150228 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:32:40.151890 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:32:40.167205 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:57508.service - OpenSSH per-connection server daemon (10.0.0.1:57508). Feb 13 15:32:40.168146 systemd-logind[1451]: Removed session 13. Feb 13 15:32:40.201689 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 57508 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:40.203091 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:40.207793 systemd-logind[1451]: New session 14 of user core. Feb 13 15:32:40.218140 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:32:40.360689 sshd[5765]: Connection closed by 10.0.0.1 port 57508 Feb 13 15:32:40.361178 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:40.373282 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:57508.service: Deactivated successfully. Feb 13 15:32:40.375961 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:32:40.378694 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:32:40.389081 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:57510.service - OpenSSH per-connection server daemon (10.0.0.1:57510). Feb 13 15:32:40.391023 systemd-logind[1451]: Removed session 14. 
Feb 13 15:32:40.426668 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 57510 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:40.428408 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:40.428699 kubelet[2663]: I0213 15:32:40.428675 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:40.429513 kubelet[2663]: E0213 15:32:40.429484 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:32:40.432960 systemd-logind[1451]: New session 15 of user core. Feb 13 15:32:40.442059 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:32:40.616417 sshd[5786]: Connection closed by 10.0.0.1 port 57510 Feb 13 15:32:40.616811 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:40.620468 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:57510.service: Deactivated successfully. Feb 13 15:32:40.622531 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:32:40.624012 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:32:40.625110 systemd-logind[1451]: Removed session 15. 
Feb 13 15:32:41.284061 containerd[1468]: time="2025-02-13T15:32:41.283995900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:41.285180 containerd[1468]: time="2025-02-13T15:32:41.285101946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:32:41.286250 containerd[1468]: time="2025-02-13T15:32:41.286197924Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:41.288732 containerd[1468]: time="2025-02-13T15:32:41.288681346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:41.289409 containerd[1468]: time="2025-02-13T15:32:41.289302762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.412108416s" Feb 13 15:32:41.289409 containerd[1468]: time="2025-02-13T15:32:41.289336015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:32:41.290490 containerd[1468]: time="2025-02-13T15:32:41.290452801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:32:41.299173 containerd[1468]: time="2025-02-13T15:32:41.299128759Z" level=info msg="CreateContainer within sandbox 
\"ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:32:41.313753 containerd[1468]: time="2025-02-13T15:32:41.313704343Z" level=info msg="CreateContainer within sandbox \"ad6c1b5a6a8f568954fe07c26a614d0b727918e7b6994561c5805659dba59ee0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed\"" Feb 13 15:32:41.314285 containerd[1468]: time="2025-02-13T15:32:41.314250047Z" level=info msg="StartContainer for \"a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed\"" Feb 13 15:32:41.352022 systemd[1]: Started cri-containerd-a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed.scope - libcontainer container a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed. Feb 13 15:32:41.806680 containerd[1468]: time="2025-02-13T15:32:41.806355980Z" level=info msg="StartContainer for \"a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed\" returns successfully" Feb 13 15:32:42.020711 containerd[1468]: time="2025-02-13T15:32:42.020655070Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:42.022271 containerd[1468]: time="2025-02-13T15:32:42.022222362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:32:42.024189 containerd[1468]: time="2025-02-13T15:32:42.024159239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 733.664047ms" Feb 13 15:32:42.024189 
containerd[1468]: time="2025-02-13T15:32:42.024188514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:32:42.025225 containerd[1468]: time="2025-02-13T15:32:42.025028860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:32:42.026215 containerd[1468]: time="2025-02-13T15:32:42.026189831Z" level=info msg="CreateContainer within sandbox \"d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:32:42.043127 containerd[1468]: time="2025-02-13T15:32:42.043078043Z" level=info msg="CreateContainer within sandbox \"d11f7bbc1464006adde804bab99ac8e079eae77d334c94a5c541304956523334\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c730fa672c917f0df6c48a9a117819e83fb70f6ff07ce2a759d85785182fdd03\"" Feb 13 15:32:42.044127 containerd[1468]: time="2025-02-13T15:32:42.043993502Z" level=info msg="StartContainer for \"c730fa672c917f0df6c48a9a117819e83fb70f6ff07ce2a759d85785182fdd03\"" Feb 13 15:32:42.072048 systemd[1]: Started cri-containerd-c730fa672c917f0df6c48a9a117819e83fb70f6ff07ce2a759d85785182fdd03.scope - libcontainer container c730fa672c917f0df6c48a9a117819e83fb70f6ff07ce2a759d85785182fdd03. 
Feb 13 15:32:42.114186 containerd[1468]: time="2025-02-13T15:32:42.114145533Z" level=info msg="StartContainer for \"c730fa672c917f0df6c48a9a117819e83fb70f6ff07ce2a759d85785182fdd03\" returns successfully" Feb 13 15:32:42.826109 kubelet[2663]: I0213 15:32:42.826016 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7466fbdd8f-crh5j" podStartSLOduration=19.560105493000002 podStartE2EDuration="26.825997445s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:34.024372135 +0000 UTC m=+39.983408730" lastFinishedPulling="2025-02-13 15:32:41.290264087 +0000 UTC m=+47.249300682" observedRunningTime="2025-02-13 15:32:42.825044275 +0000 UTC m=+48.784080870" watchObservedRunningTime="2025-02-13 15:32:42.825997445 +0000 UTC m=+48.785034030" Feb 13 15:32:42.872560 kubelet[2663]: I0213 15:32:42.872094 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd68f4995-w55w7" podStartSLOduration=18.90200337 podStartE2EDuration="26.872074011s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:34.054802678 +0000 UTC m=+40.013839273" lastFinishedPulling="2025-02-13 15:32:42.024873319 +0000 UTC m=+47.983909914" observedRunningTime="2025-02-13 15:32:42.838468707 +0000 UTC m=+48.797505322" watchObservedRunningTime="2025-02-13 15:32:42.872074011 +0000 UTC m=+48.831110616" Feb 13 15:32:43.685736 containerd[1468]: time="2025-02-13T15:32:43.685677854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:43.686543 containerd[1468]: time="2025-02-13T15:32:43.686457628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:32:43.687704 containerd[1468]: time="2025-02-13T15:32:43.687655917Z" 
level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:43.690473 containerd[1468]: time="2025-02-13T15:32:43.690437458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:43.691183 containerd[1468]: time="2025-02-13T15:32:43.691140768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.666079237s" Feb 13 15:32:43.691254 containerd[1468]: time="2025-02-13T15:32:43.691183008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:32:43.693420 containerd[1468]: time="2025-02-13T15:32:43.693390802Z" level=info msg="CreateContainer within sandbox \"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:32:43.707667 containerd[1468]: time="2025-02-13T15:32:43.707611325Z" level=info msg="CreateContainer within sandbox \"d8230e5c39eb717658260215abd2750d6d31bb2ade19bf328d1e3ec8c07f75f7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c6bc5e7ae9d81b85bf750f451bf41f8aa4d9367e48d4c09849b5884d56dace87\"" Feb 13 15:32:43.708657 containerd[1468]: time="2025-02-13T15:32:43.708599461Z" level=info msg="StartContainer for 
\"c6bc5e7ae9d81b85bf750f451bf41f8aa4d9367e48d4c09849b5884d56dace87\"" Feb 13 15:32:43.743036 systemd[1]: Started cri-containerd-c6bc5e7ae9d81b85bf750f451bf41f8aa4d9367e48d4c09849b5884d56dace87.scope - libcontainer container c6bc5e7ae9d81b85bf750f451bf41f8aa4d9367e48d4c09849b5884d56dace87. Feb 13 15:32:43.961440 containerd[1468]: time="2025-02-13T15:32:43.961308712Z" level=info msg="StartContainer for \"c6bc5e7ae9d81b85bf750f451bf41f8aa4d9367e48d4c09849b5884d56dace87\" returns successfully" Feb 13 15:32:44.000705 kubelet[2663]: I0213 15:32:44.000146 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xbpjc" podStartSLOduration=18.192498198 podStartE2EDuration="28.00012295s" podCreationTimestamp="2025-02-13 15:32:16 +0000 UTC" firstStartedPulling="2025-02-13 15:32:33.884298406 +0000 UTC m=+39.843335001" lastFinishedPulling="2025-02-13 15:32:43.691923158 +0000 UTC m=+49.650959753" observedRunningTime="2025-02-13 15:32:44.000050074 +0000 UTC m=+49.959086669" watchObservedRunningTime="2025-02-13 15:32:44.00012295 +0000 UTC m=+49.959159545" Feb 13 15:32:44.187611 kubelet[2663]: I0213 15:32:44.187565 2663 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:32:44.187611 kubelet[2663]: I0213 15:32:44.187613 2663 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:32:45.253560 kubelet[2663]: I0213 15:32:45.253499 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:45.627425 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:38370.service - OpenSSH per-connection server daemon (10.0.0.1:38370). 
Feb 13 15:32:45.691359 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 38370 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:45.693035 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:45.697029 systemd-logind[1451]: New session 16 of user core. Feb 13 15:32:45.707015 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:32:45.840964 sshd[5970]: Connection closed by 10.0.0.1 port 38370 Feb 13 15:32:45.841377 sshd-session[5968]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:45.845975 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:38370.service: Deactivated successfully. Feb 13 15:32:45.848163 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:32:45.849073 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:32:45.850454 systemd-logind[1451]: Removed session 16. Feb 13 15:32:50.852992 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:38380.service - OpenSSH per-connection server daemon (10.0.0.1:38380). Feb 13 15:32:50.904874 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 38380 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:50.906250 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:50.909932 systemd-logind[1451]: New session 17 of user core. Feb 13 15:32:50.922064 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:32:51.031385 sshd[5985]: Connection closed by 10.0.0.1 port 38380 Feb 13 15:32:51.031727 sshd-session[5983]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:51.035208 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:38380.service: Deactivated successfully. Feb 13 15:32:51.037025 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:32:51.037544 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. 
Feb 13 15:32:51.038444 systemd-logind[1451]: Removed session 17. Feb 13 15:32:54.112462 containerd[1468]: time="2025-02-13T15:32:54.112424412Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:54.112929 containerd[1468]: time="2025-02-13T15:32:54.112520553Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:54.112929 containerd[1468]: time="2025-02-13T15:32:54.112557292Z" level=info msg="StopPodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:54.118726 containerd[1468]: time="2025-02-13T15:32:54.118689817Z" level=info msg="RemovePodSandbox for \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:54.127884 containerd[1468]: time="2025-02-13T15:32:54.127848441Z" level=info msg="Forcibly stopping sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\"" Feb 13 15:32:54.128012 containerd[1468]: time="2025-02-13T15:32:54.127965761Z" level=info msg="TearDown network for sandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" successfully" Feb 13 15:32:54.138160 containerd[1468]: time="2025-02-13T15:32:54.138121015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.138249 containerd[1468]: time="2025-02-13T15:32:54.138179815Z" level=info msg="RemovePodSandbox \"d70f3a30bc2b3e1397913fb4d38c166d0cdae9aac5bc55a547bea4dd83aee7f6\" returns successfully" Feb 13 15:32:54.138571 containerd[1468]: time="2025-02-13T15:32:54.138550640Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:54.138682 containerd[1468]: time="2025-02-13T15:32:54.138660898Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:54.138682 containerd[1468]: time="2025-02-13T15:32:54.138675345Z" level=info msg="StopPodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:54.138893 containerd[1468]: time="2025-02-13T15:32:54.138873978Z" level=info msg="RemovePodSandbox for \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:54.138957 containerd[1468]: time="2025-02-13T15:32:54.138893234Z" level=info msg="Forcibly stopping sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\"" Feb 13 15:32:54.139018 containerd[1468]: time="2025-02-13T15:32:54.138980768Z" level=info msg="TearDown network for sandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" successfully" Feb 13 15:32:54.142525 containerd[1468]: time="2025-02-13T15:32:54.142498879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.142587 containerd[1468]: time="2025-02-13T15:32:54.142534315Z" level=info msg="RemovePodSandbox \"0f992d520053d0f05035423098daaa872cc35c6e5953a890792597d9b4dbf493\" returns successfully" Feb 13 15:32:54.142753 containerd[1468]: time="2025-02-13T15:32:54.142724772Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:54.142840 containerd[1468]: time="2025-02-13T15:32:54.142822305Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:54.142870 containerd[1468]: time="2025-02-13T15:32:54.142839728Z" level=info msg="StopPodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:54.143040 containerd[1468]: time="2025-02-13T15:32:54.143014556Z" level=info msg="RemovePodSandbox for \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:54.143077 containerd[1468]: time="2025-02-13T15:32:54.143038831Z" level=info msg="Forcibly stopping sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\"" Feb 13 15:32:54.143148 containerd[1468]: time="2025-02-13T15:32:54.143116257Z" level=info msg="TearDown network for sandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" successfully" Feb 13 15:32:54.146608 containerd[1468]: time="2025-02-13T15:32:54.146579745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.146654 containerd[1468]: time="2025-02-13T15:32:54.146632073Z" level=info msg="RemovePodSandbox \"6181cafd23362c7520ac198a2a59e300781e45c3960b5ccb75630565de7e753b\" returns successfully" Feb 13 15:32:54.146845 containerd[1468]: time="2025-02-13T15:32:54.146820868Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:54.146939 containerd[1468]: time="2025-02-13T15:32:54.146923691Z" level=info msg="TearDown network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" successfully" Feb 13 15:32:54.146968 containerd[1468]: time="2025-02-13T15:32:54.146938178Z" level=info msg="StopPodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" returns successfully" Feb 13 15:32:54.147149 containerd[1468]: time="2025-02-13T15:32:54.147129036Z" level=info msg="RemovePodSandbox for \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:54.147188 containerd[1468]: time="2025-02-13T15:32:54.147151628Z" level=info msg="Forcibly stopping sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\"" Feb 13 15:32:54.147254 containerd[1468]: time="2025-02-13T15:32:54.147225637Z" level=info msg="TearDown network for sandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" successfully" Feb 13 15:32:54.151006 containerd[1468]: time="2025-02-13T15:32:54.150980001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.151070 containerd[1468]: time="2025-02-13T15:32:54.151020387Z" level=info msg="RemovePodSandbox \"0d5cac838b43fb83f006f7f31420417c30ef07aadabdda2164bbd7c3778a7e39\" returns successfully" Feb 13 15:32:54.151321 containerd[1468]: time="2025-02-13T15:32:54.151282318Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" Feb 13 15:32:54.151413 containerd[1468]: time="2025-02-13T15:32:54.151388308Z" level=info msg="TearDown network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" successfully" Feb 13 15:32:54.151413 containerd[1468]: time="2025-02-13T15:32:54.151404849Z" level=info msg="StopPodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" returns successfully" Feb 13 15:32:54.151701 containerd[1468]: time="2025-02-13T15:32:54.151668082Z" level=info msg="RemovePodSandbox for \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" Feb 13 15:32:54.151701 containerd[1468]: time="2025-02-13T15:32:54.151687368Z" level=info msg="Forcibly stopping sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\"" Feb 13 15:32:54.152043 containerd[1468]: time="2025-02-13T15:32:54.151748593Z" level=info msg="TearDown network for sandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" successfully" Feb 13 15:32:54.155093 containerd[1468]: time="2025-02-13T15:32:54.155070175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.155154 containerd[1468]: time="2025-02-13T15:32:54.155106053Z" level=info msg="RemovePodSandbox \"a5b73d0fcb2a99af8666f4b8e7dfbbfa4b31d5af2728ffc6ba8fff9d4c47f801\" returns successfully" Feb 13 15:32:54.155381 containerd[1468]: time="2025-02-13T15:32:54.155358757Z" level=info msg="StopPodSandbox for \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\"" Feb 13 15:32:54.155470 containerd[1468]: time="2025-02-13T15:32:54.155451821Z" level=info msg="TearDown network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" successfully" Feb 13 15:32:54.155505 containerd[1468]: time="2025-02-13T15:32:54.155467671Z" level=info msg="StopPodSandbox for \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" returns successfully" Feb 13 15:32:54.155726 containerd[1468]: time="2025-02-13T15:32:54.155704936Z" level=info msg="RemovePodSandbox for \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\"" Feb 13 15:32:54.155806 containerd[1468]: time="2025-02-13T15:32:54.155786589Z" level=info msg="Forcibly stopping sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\"" Feb 13 15:32:54.155883 containerd[1468]: time="2025-02-13T15:32:54.155857502Z" level=info msg="TearDown network for sandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" successfully" Feb 13 15:32:54.159250 containerd[1468]: time="2025-02-13T15:32:54.159227725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.159308 containerd[1468]: time="2025-02-13T15:32:54.159264464Z" level=info msg="RemovePodSandbox \"805f55e6be6144dcd08800f65c82a03aa8fb822dd8eaed1f616632fcfa8e969d\" returns successfully" Feb 13 15:32:54.159505 containerd[1468]: time="2025-02-13T15:32:54.159481621Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:54.159594 containerd[1468]: time="2025-02-13T15:32:54.159574767Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:54.159632 containerd[1468]: time="2025-02-13T15:32:54.159591689Z" level=info msg="StopPodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:54.159830 containerd[1468]: time="2025-02-13T15:32:54.159807694Z" level=info msg="RemovePodSandbox for \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:54.159928 containerd[1468]: time="2025-02-13T15:32:54.159830487Z" level=info msg="Forcibly stopping sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\"" Feb 13 15:32:54.159966 containerd[1468]: time="2025-02-13T15:32:54.159923391Z" level=info msg="TearDown network for sandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" successfully" Feb 13 15:32:54.164062 containerd[1468]: time="2025-02-13T15:32:54.164028813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.164128 containerd[1468]: time="2025-02-13T15:32:54.164070191Z" level=info msg="RemovePodSandbox \"7c3f4235f14e17ac8d206bfaf526c66438a1085e7262e1fa1382929e327beceb\" returns successfully" Feb 13 15:32:54.164458 containerd[1468]: time="2025-02-13T15:32:54.164438122Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:54.164546 containerd[1468]: time="2025-02-13T15:32:54.164529002Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:54.164575 containerd[1468]: time="2025-02-13T15:32:54.164546284Z" level=info msg="StopPodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:54.164769 containerd[1468]: time="2025-02-13T15:32:54.164750688Z" level=info msg="RemovePodSandbox for \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:54.164808 containerd[1468]: time="2025-02-13T15:32:54.164772248Z" level=info msg="Forcibly stopping sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\"" Feb 13 15:32:54.164875 containerd[1468]: time="2025-02-13T15:32:54.164845686Z" level=info msg="TearDown network for sandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" successfully" Feb 13 15:32:54.168475 containerd[1468]: time="2025-02-13T15:32:54.168444328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.168526 containerd[1468]: time="2025-02-13T15:32:54.168489593Z" level=info msg="RemovePodSandbox \"b8ee0f4187d3fedc712ede0a785785da36d93688a49ef8f7a6ff9a4924b9b674\" returns successfully" Feb 13 15:32:54.168724 containerd[1468]: time="2025-02-13T15:32:54.168702893Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:54.168817 containerd[1468]: time="2025-02-13T15:32:54.168794935Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:54.168817 containerd[1468]: time="2025-02-13T15:32:54.168810735Z" level=info msg="StopPodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:54.169023 containerd[1468]: time="2025-02-13T15:32:54.168995412Z" level=info msg="RemovePodSandbox for \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:54.169023 containerd[1468]: time="2025-02-13T15:32:54.169016892Z" level=info msg="Forcibly stopping sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\"" Feb 13 15:32:54.169109 containerd[1468]: time="2025-02-13T15:32:54.169080812Z" level=info msg="TearDown network for sandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" successfully" Feb 13 15:32:54.172480 containerd[1468]: time="2025-02-13T15:32:54.172453891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.172542 containerd[1468]: time="2025-02-13T15:32:54.172490850Z" level=info msg="RemovePodSandbox \"866782dec6a38066b32b3c91ab709b9e76d0d7ca39f8c2c4ba028ea047ea390c\" returns successfully" Feb 13 15:32:54.172750 containerd[1468]: time="2025-02-13T15:32:54.172723216Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:54.172831 containerd[1468]: time="2025-02-13T15:32:54.172812083Z" level=info msg="TearDown network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" successfully" Feb 13 15:32:54.172874 containerd[1468]: time="2025-02-13T15:32:54.172828604Z" level=info msg="StopPodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" returns successfully" Feb 13 15:32:54.173024 containerd[1468]: time="2025-02-13T15:32:54.173006888Z" level=info msg="RemovePodSandbox for \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:54.173088 containerd[1468]: time="2025-02-13T15:32:54.173024762Z" level=info msg="Forcibly stopping sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\"" Feb 13 15:32:54.173134 containerd[1468]: time="2025-02-13T15:32:54.173091757Z" level=info msg="TearDown network for sandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" successfully" Feb 13 15:32:54.176382 containerd[1468]: time="2025-02-13T15:32:54.176342998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.176382 containerd[1468]: time="2025-02-13T15:32:54.176370770Z" level=info msg="RemovePodSandbox \"d96239199a1cb7139455f83162e086967e5934fa99545255be594c643d47ee1c\" returns successfully" Feb 13 15:32:54.176608 containerd[1468]: time="2025-02-13T15:32:54.176568831Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" Feb 13 15:32:54.176692 containerd[1468]: time="2025-02-13T15:32:54.176667666Z" level=info msg="TearDown network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" successfully" Feb 13 15:32:54.176692 containerd[1468]: time="2025-02-13T15:32:54.176683947Z" level=info msg="StopPodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" returns successfully" Feb 13 15:32:54.176869 containerd[1468]: time="2025-02-13T15:32:54.176847884Z" level=info msg="RemovePodSandbox for \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" Feb 13 15:32:54.176869 containerd[1468]: time="2025-02-13T15:32:54.176863814Z" level=info msg="Forcibly stopping sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\"" Feb 13 15:32:54.176962 containerd[1468]: time="2025-02-13T15:32:54.176941159Z" level=info msg="TearDown network for sandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" successfully" Feb 13 15:32:54.180338 containerd[1468]: time="2025-02-13T15:32:54.180310541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.180392 containerd[1468]: time="2025-02-13T15:32:54.180341940Z" level=info msg="RemovePodSandbox \"322ef70d8eeaa1578ed366f262367ef7812c86017539ef1cde7fa1b971b608d3\" returns successfully" Feb 13 15:32:54.180596 containerd[1468]: time="2025-02-13T15:32:54.180565590Z" level=info msg="StopPodSandbox for \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\"" Feb 13 15:32:54.180687 containerd[1468]: time="2025-02-13T15:32:54.180657092Z" level=info msg="TearDown network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" successfully" Feb 13 15:32:54.180687 containerd[1468]: time="2025-02-13T15:32:54.180670146Z" level=info msg="StopPodSandbox for \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" returns successfully" Feb 13 15:32:54.180882 containerd[1468]: time="2025-02-13T15:32:54.180866354Z" level=info msg="RemovePodSandbox for \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\"" Feb 13 15:32:54.180938 containerd[1468]: time="2025-02-13T15:32:54.180884899Z" level=info msg="Forcibly stopping sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\"" Feb 13 15:32:54.180990 containerd[1468]: time="2025-02-13T15:32:54.180958437Z" level=info msg="TearDown network for sandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" successfully" Feb 13 15:32:54.184605 containerd[1468]: time="2025-02-13T15:32:54.184569131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.184700 containerd[1468]: time="2025-02-13T15:32:54.184635566Z" level=info msg="RemovePodSandbox \"d05334d30ad3868c2525feace8a4533258bd3630119d01bf3cc8d46112c41d60\" returns successfully" Feb 13 15:32:54.185079 containerd[1468]: time="2025-02-13T15:32:54.185029705Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:54.185444 containerd[1468]: time="2025-02-13T15:32:54.185393278Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:54.185484 containerd[1468]: time="2025-02-13T15:32:54.185443172Z" level=info msg="StopPodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:54.185726 containerd[1468]: time="2025-02-13T15:32:54.185709882Z" level=info msg="RemovePodSandbox for \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:54.185759 containerd[1468]: time="2025-02-13T15:32:54.185726713Z" level=info msg="Forcibly stopping sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\"" Feb 13 15:32:54.188249 containerd[1468]: time="2025-02-13T15:32:54.188206496Z" level=info msg="TearDown network for sandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" successfully" Feb 13 15:32:54.191742 containerd[1468]: time="2025-02-13T15:32:54.191712173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.191802 containerd[1468]: time="2025-02-13T15:32:54.191744344Z" level=info msg="RemovePodSandbox \"1870aeaf09731cb2380c5a4e1fe705cd6f591ef1c69abe4b4a84a51807db20d5\" returns successfully" Feb 13 15:32:54.192019 containerd[1468]: time="2025-02-13T15:32:54.191982680Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:54.192088 containerd[1468]: time="2025-02-13T15:32:54.192058092Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:54.192088 containerd[1468]: time="2025-02-13T15:32:54.192085214Z" level=info msg="StopPodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:54.192315 containerd[1468]: time="2025-02-13T15:32:54.192289687Z" level=info msg="RemovePodSandbox for \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:54.192315 containerd[1468]: time="2025-02-13T15:32:54.192311188Z" level=info msg="Forcibly stopping sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\"" Feb 13 15:32:54.192419 containerd[1468]: time="2025-02-13T15:32:54.192372683Z" level=info msg="TearDown network for sandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" successfully" Feb 13 15:32:54.195971 containerd[1468]: time="2025-02-13T15:32:54.195925238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.195971 containerd[1468]: time="2025-02-13T15:32:54.195960294Z" level=info msg="RemovePodSandbox \"6c98208d6a807091ea9a789b07a63ebb0e3d1cd3f1d35f5b3a73475e43ea90dd\" returns successfully" Feb 13 15:32:54.196165 containerd[1468]: time="2025-02-13T15:32:54.196147735Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:54.196251 containerd[1468]: time="2025-02-13T15:32:54.196218077Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:54.196251 containerd[1468]: time="2025-02-13T15:32:54.196226653Z" level=info msg="StopPodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:54.196468 containerd[1468]: time="2025-02-13T15:32:54.196447197Z" level=info msg="RemovePodSandbox for \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:54.196513 containerd[1468]: time="2025-02-13T15:32:54.196472494Z" level=info msg="Forcibly stopping sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\"" Feb 13 15:32:54.196585 containerd[1468]: time="2025-02-13T15:32:54.196549619Z" level=info msg="TearDown network for sandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" successfully" Feb 13 15:32:54.199884 containerd[1468]: time="2025-02-13T15:32:54.199847176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.199884 containerd[1468]: time="2025-02-13T15:32:54.199882833Z" level=info msg="RemovePodSandbox \"6471c94159e818350b5caf3fe0a0131a680298d347e22189085b9d01dca83945\" returns successfully" Feb 13 15:32:54.200115 containerd[1468]: time="2025-02-13T15:32:54.200073511Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:54.200204 containerd[1468]: time="2025-02-13T15:32:54.200177325Z" level=info msg="TearDown network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" successfully" Feb 13 15:32:54.200204 containerd[1468]: time="2025-02-13T15:32:54.200193866Z" level=info msg="StopPodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" returns successfully" Feb 13 15:32:54.200444 containerd[1468]: time="2025-02-13T15:32:54.200422546Z" level=info msg="RemovePodSandbox for \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:54.200444 containerd[1468]: time="2025-02-13T15:32:54.200441141Z" level=info msg="Forcibly stopping sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\"" Feb 13 15:32:54.200515 containerd[1468]: time="2025-02-13T15:32:54.200501755Z" level=info msg="TearDown network for sandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" successfully" Feb 13 15:32:54.203803 containerd[1468]: time="2025-02-13T15:32:54.203777301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.203865 containerd[1468]: time="2025-02-13T15:32:54.203819941Z" level=info msg="RemovePodSandbox \"d226efb2af55b76747ae91d62026edf3f97192c8cfbc55998d26c44b18b859a3\" returns successfully" Feb 13 15:32:54.204125 containerd[1468]: time="2025-02-13T15:32:54.204100417Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" Feb 13 15:32:54.204194 containerd[1468]: time="2025-02-13T15:32:54.204179735Z" level=info msg="TearDown network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" successfully" Feb 13 15:32:54.204194 containerd[1468]: time="2025-02-13T15:32:54.204191578Z" level=info msg="StopPodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" returns successfully" Feb 13 15:32:54.204773 containerd[1468]: time="2025-02-13T15:32:54.204386102Z" level=info msg="RemovePodSandbox for \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" Feb 13 15:32:54.204773 containerd[1468]: time="2025-02-13T15:32:54.204405619Z" level=info msg="Forcibly stopping sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\"" Feb 13 15:32:54.204773 containerd[1468]: time="2025-02-13T15:32:54.204469298Z" level=info msg="TearDown network for sandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" successfully" Feb 13 15:32:54.208937 containerd[1468]: time="2025-02-13T15:32:54.208880565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.208999 containerd[1468]: time="2025-02-13T15:32:54.208937512Z" level=info msg="RemovePodSandbox \"4c4878feab06b0b96c88c77534a4e370779604fec2a7ede14f649f9bd435aed6\" returns successfully" Feb 13 15:32:54.209172 containerd[1468]: time="2025-02-13T15:32:54.209142296Z" level=info msg="StopPodSandbox for \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\"" Feb 13 15:32:54.209261 containerd[1468]: time="2025-02-13T15:32:54.209231323Z" level=info msg="TearDown network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" successfully" Feb 13 15:32:54.209261 containerd[1468]: time="2025-02-13T15:32:54.209248505Z" level=info msg="StopPodSandbox for \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" returns successfully" Feb 13 15:32:54.209509 containerd[1468]: time="2025-02-13T15:32:54.209484518Z" level=info msg="RemovePodSandbox for \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\"" Feb 13 15:32:54.209576 containerd[1468]: time="2025-02-13T15:32:54.209511209Z" level=info msg="Forcibly stopping sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\"" Feb 13 15:32:54.209607 containerd[1468]: time="2025-02-13T15:32:54.209584256Z" level=info msg="TearDown network for sandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" successfully" Feb 13 15:32:54.212885 containerd[1468]: time="2025-02-13T15:32:54.212858067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.212953 containerd[1468]: time="2025-02-13T15:32:54.212895528Z" level=info msg="RemovePodSandbox \"43cd4f65b78fd88df8a554fa48ee02942f4553c454040fe2ee1ed656022fad6e\" returns successfully" Feb 13 15:32:54.213181 containerd[1468]: time="2025-02-13T15:32:54.213156518Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:54.213276 containerd[1468]: time="2025-02-13T15:32:54.213256475Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:54.213276 containerd[1468]: time="2025-02-13T15:32:54.213273778Z" level=info msg="StopPodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:54.213523 containerd[1468]: time="2025-02-13T15:32:54.213498569Z" level=info msg="RemovePodSandbox for \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:54.213569 containerd[1468]: time="2025-02-13T15:32:54.213526502Z" level=info msg="Forcibly stopping sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\"" Feb 13 15:32:54.213646 containerd[1468]: time="2025-02-13T15:32:54.213602935Z" level=info msg="TearDown network for sandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" successfully" Feb 13 15:32:54.216918 containerd[1468]: time="2025-02-13T15:32:54.216874183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.216974 containerd[1468]: time="2025-02-13T15:32:54.216920109Z" level=info msg="RemovePodSandbox \"46f667aac332bcb920bb474ba927d5bc5710d6ae5d32024c8a1a3e8ffb26b0b5\" returns successfully" Feb 13 15:32:54.217161 containerd[1468]: time="2025-02-13T15:32:54.217138599Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:54.217239 containerd[1468]: time="2025-02-13T15:32:54.217223428Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:54.217261 containerd[1468]: time="2025-02-13T15:32:54.217238045Z" level=info msg="StopPodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:54.217487 containerd[1468]: time="2025-02-13T15:32:54.217441447Z" level=info msg="RemovePodSandbox for \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:54.217487 containerd[1468]: time="2025-02-13T15:32:54.217467235Z" level=info msg="Forcibly stopping sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\"" Feb 13 15:32:54.217560 containerd[1468]: time="2025-02-13T15:32:54.217542236Z" level=info msg="TearDown network for sandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" successfully" Feb 13 15:32:54.220852 containerd[1468]: time="2025-02-13T15:32:54.220820557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.220852 containerd[1468]: time="2025-02-13T15:32:54.220850182Z" level=info msg="RemovePodSandbox \"fef759cb9e49dd62ed196cf969291a295b23a4f599c2ddd5ded197ee720efe36\" returns successfully" Feb 13 15:32:54.221239 containerd[1468]: time="2025-02-13T15:32:54.221077079Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:54.221239 containerd[1468]: time="2025-02-13T15:32:54.221175734Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 15:32:54.221239 containerd[1468]: time="2025-02-13T15:32:54.221189259Z" level=info msg="StopPodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:54.221420 containerd[1468]: time="2025-02-13T15:32:54.221397119Z" level=info msg="RemovePodSandbox for \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:54.221467 containerd[1468]: time="2025-02-13T15:32:54.221424340Z" level=info msg="Forcibly stopping sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\"" Feb 13 15:32:54.221517 containerd[1468]: time="2025-02-13T15:32:54.221490494Z" level=info msg="TearDown network for sandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" successfully" Feb 13 15:32:54.225182 containerd[1468]: time="2025-02-13T15:32:54.225154278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.225238 containerd[1468]: time="2025-02-13T15:32:54.225192700Z" level=info msg="RemovePodSandbox \"7f6dcfb49b3ec457f27d0ca8b5036b1e8e5143d7731a3e51c6dda8fd619dbda7\" returns successfully" Feb 13 15:32:54.225444 containerd[1468]: time="2025-02-13T15:32:54.225423974Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:54.225534 containerd[1468]: time="2025-02-13T15:32:54.225514974Z" level=info msg="TearDown network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" successfully" Feb 13 15:32:54.225580 containerd[1468]: time="2025-02-13T15:32:54.225534180Z" level=info msg="StopPodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" returns successfully" Feb 13 15:32:54.225783 containerd[1468]: time="2025-02-13T15:32:54.225737403Z" level=info msg="RemovePodSandbox for \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:54.225783 containerd[1468]: time="2025-02-13T15:32:54.225763141Z" level=info msg="Forcibly stopping sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\"" Feb 13 15:32:54.225851 containerd[1468]: time="2025-02-13T15:32:54.225826971Z" level=info msg="TearDown network for sandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" successfully" Feb 13 15:32:54.229287 containerd[1468]: time="2025-02-13T15:32:54.229259571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.229367 containerd[1468]: time="2025-02-13T15:32:54.229292913Z" level=info msg="RemovePodSandbox \"1e5ed6e8b66b9400d8bfdd5f31347ea305c9814ea340b774a52f3c3e7d9f3468\" returns successfully" Feb 13 15:32:54.229578 containerd[1468]: time="2025-02-13T15:32:54.229534406Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" Feb 13 15:32:54.229664 containerd[1468]: time="2025-02-13T15:32:54.229638802Z" level=info msg="TearDown network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" successfully" Feb 13 15:32:54.229664 containerd[1468]: time="2025-02-13T15:32:54.229657056Z" level=info msg="StopPodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" returns successfully" Feb 13 15:32:54.229916 containerd[1468]: time="2025-02-13T15:32:54.229874244Z" level=info msg="RemovePodSandbox for \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" Feb 13 15:32:54.229961 containerd[1468]: time="2025-02-13T15:32:54.229916593Z" level=info msg="Forcibly stopping sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\"" Feb 13 15:32:54.230027 containerd[1468]: time="2025-02-13T15:32:54.229993828Z" level=info msg="TearDown network for sandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" successfully" Feb 13 15:32:54.234562 containerd[1468]: time="2025-02-13T15:32:54.234514420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.234655 containerd[1468]: time="2025-02-13T15:32:54.234571306Z" level=info msg="RemovePodSandbox \"66078ab2ad793981a1112f1ee7900e186a554a7abc04abff820bf9aed9a2419d\" returns successfully" Feb 13 15:32:54.234951 containerd[1468]: time="2025-02-13T15:32:54.234932614Z" level=info msg="StopPodSandbox for \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\"" Feb 13 15:32:54.235035 containerd[1468]: time="2025-02-13T15:32:54.235021962Z" level=info msg="TearDown network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" successfully" Feb 13 15:32:54.235035 containerd[1468]: time="2025-02-13T15:32:54.235033634Z" level=info msg="StopPodSandbox for \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" returns successfully" Feb 13 15:32:54.235272 containerd[1468]: time="2025-02-13T15:32:54.235258416Z" level=info msg="RemovePodSandbox for \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\"" Feb 13 15:32:54.235311 containerd[1468]: time="2025-02-13T15:32:54.235275748Z" level=info msg="Forcibly stopping sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\"" Feb 13 15:32:54.235385 containerd[1468]: time="2025-02-13T15:32:54.235332314Z" level=info msg="TearDown network for sandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" successfully" Feb 13 15:32:54.238994 containerd[1468]: time="2025-02-13T15:32:54.238964920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.239055 containerd[1468]: time="2025-02-13T15:32:54.239007229Z" level=info msg="RemovePodSandbox \"0828f2cdbb9112e9862cf132a92a1f32a84d3ce7a5c07980b31641c5f74bb77b\" returns successfully" Feb 13 15:32:54.239370 containerd[1468]: time="2025-02-13T15:32:54.239220649Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:54.239370 containerd[1468]: time="2025-02-13T15:32:54.239312141Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:54.239370 containerd[1468]: time="2025-02-13T15:32:54.239322140Z" level=info msg="StopPodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:54.239531 containerd[1468]: time="2025-02-13T15:32:54.239510503Z" level=info msg="RemovePodSandbox for \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:54.239573 containerd[1468]: time="2025-02-13T15:32:54.239532635Z" level=info msg="Forcibly stopping sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\"" Feb 13 15:32:54.239641 containerd[1468]: time="2025-02-13T15:32:54.239601474Z" level=info msg="TearDown network for sandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" successfully" Feb 13 15:32:54.243146 containerd[1468]: time="2025-02-13T15:32:54.243118142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.243215 containerd[1468]: time="2025-02-13T15:32:54.243146595Z" level=info msg="RemovePodSandbox \"3f85c108599c4411561ddf52f36bce955c2fc92dd8376fbc83b222ec7b164903\" returns successfully" Feb 13 15:32:54.243405 containerd[1468]: time="2025-02-13T15:32:54.243379302Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:54.243501 containerd[1468]: time="2025-02-13T15:32:54.243471345Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:54.243501 containerd[1468]: time="2025-02-13T15:32:54.243487415Z" level=info msg="StopPodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:54.243696 containerd[1468]: time="2025-02-13T15:32:54.243662754Z" level=info msg="RemovePodSandbox for \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:54.243696 containerd[1468]: time="2025-02-13T15:32:54.243685647Z" level=info msg="Forcibly stopping sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\"" Feb 13 15:32:54.243792 containerd[1468]: time="2025-02-13T15:32:54.243757582Z" level=info msg="TearDown network for sandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" successfully" Feb 13 15:32:54.247204 containerd[1468]: time="2025-02-13T15:32:54.247155487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.247204 containerd[1468]: time="2025-02-13T15:32:54.247188499Z" level=info msg="RemovePodSandbox \"376a3c11e2a5fb28ac2c0fb655e0c613f2198536c121bed3d75220f300a7d8be\" returns successfully" Feb 13 15:32:54.247428 containerd[1468]: time="2025-02-13T15:32:54.247405556Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:54.247522 containerd[1468]: time="2025-02-13T15:32:54.247498961Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:54.247522 containerd[1468]: time="2025-02-13T15:32:54.247514540Z" level=info msg="StopPodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:54.247788 containerd[1468]: time="2025-02-13T15:32:54.247754390Z" level=info msg="RemovePodSandbox for \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:54.247841 containerd[1468]: time="2025-02-13T15:32:54.247787813Z" level=info msg="Forcibly stopping sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\"" Feb 13 15:32:54.248027 containerd[1468]: time="2025-02-13T15:32:54.247865409Z" level=info msg="TearDown network for sandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" successfully" Feb 13 15:32:54.251371 containerd[1468]: time="2025-02-13T15:32:54.251337844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.251447 containerd[1468]: time="2025-02-13T15:32:54.251377980Z" level=info msg="RemovePodSandbox \"4883e38ff5bb281dfed647a7ac467486f86c66866308979bce276f0a29a9c651\" returns successfully" Feb 13 15:32:54.251657 containerd[1468]: time="2025-02-13T15:32:54.251637857Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:54.251750 containerd[1468]: time="2025-02-13T15:32:54.251731503Z" level=info msg="TearDown network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" successfully" Feb 13 15:32:54.251750 containerd[1468]: time="2025-02-13T15:32:54.251747503Z" level=info msg="StopPodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" returns successfully" Feb 13 15:32:54.251987 containerd[1468]: time="2025-02-13T15:32:54.251964840Z" level=info msg="RemovePodSandbox for \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:54.252035 containerd[1468]: time="2025-02-13T15:32:54.251988094Z" level=info msg="Forcibly stopping sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\"" Feb 13 15:32:54.252095 containerd[1468]: time="2025-02-13T15:32:54.252062504Z" level=info msg="TearDown network for sandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" successfully" Feb 13 15:32:54.255697 containerd[1468]: time="2025-02-13T15:32:54.255674430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.255758 containerd[1468]: time="2025-02-13T15:32:54.255725516Z" level=info msg="RemovePodSandbox \"db7f1759cc73e6a1bbb8074c3e5c57b9519bce697b2bae81b03ed8e8624b1505\" returns successfully" Feb 13 15:32:54.255994 containerd[1468]: time="2025-02-13T15:32:54.255974112Z" level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" Feb 13 15:32:54.256160 containerd[1468]: time="2025-02-13T15:32:54.256137860Z" level=info msg="TearDown network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" successfully" Feb 13 15:32:54.256160 containerd[1468]: time="2025-02-13T15:32:54.256152668Z" level=info msg="StopPodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" returns successfully" Feb 13 15:32:54.256408 containerd[1468]: time="2025-02-13T15:32:54.256384072Z" level=info msg="RemovePodSandbox for \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" Feb 13 15:32:54.256456 containerd[1468]: time="2025-02-13T15:32:54.256410592Z" level=info msg="Forcibly stopping sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\"" Feb 13 15:32:54.256514 containerd[1468]: time="2025-02-13T15:32:54.256482807Z" level=info msg="TearDown network for sandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" successfully" Feb 13 15:32:54.259895 containerd[1468]: time="2025-02-13T15:32:54.259854363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.259981 containerd[1468]: time="2025-02-13T15:32:54.259895941Z" level=info msg="RemovePodSandbox \"5d9b14b314e2070f8f5b4646a8aab12f082b7ddc616b3e0868d138d09040e20b\" returns successfully" Feb 13 15:32:54.260168 containerd[1468]: time="2025-02-13T15:32:54.260133517Z" level=info msg="StopPodSandbox for \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\"" Feb 13 15:32:54.260251 containerd[1468]: time="2025-02-13T15:32:54.260227793Z" level=info msg="TearDown network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" successfully" Feb 13 15:32:54.260251 containerd[1468]: time="2025-02-13T15:32:54.260245236Z" level=info msg="StopPodSandbox for \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" returns successfully" Feb 13 15:32:54.260480 containerd[1468]: time="2025-02-13T15:32:54.260457404Z" level=info msg="RemovePodSandbox for \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\"" Feb 13 15:32:54.260529 containerd[1468]: time="2025-02-13T15:32:54.260482130Z" level=info msg="Forcibly stopping sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\"" Feb 13 15:32:54.260586 containerd[1468]: time="2025-02-13T15:32:54.260555227Z" level=info msg="TearDown network for sandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" successfully" Feb 13 15:32:54.263847 containerd[1468]: time="2025-02-13T15:32:54.263817909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.263896 containerd[1468]: time="2025-02-13T15:32:54.263855500Z" level=info msg="RemovePodSandbox \"34420aa726db1acc07c332585e478a0b7b32839b7c0d4fb6bb2e0b976e80d72f\" returns successfully" Feb 13 15:32:54.264146 containerd[1468]: time="2025-02-13T15:32:54.264125126Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:54.264240 containerd[1468]: time="2025-02-13T15:32:54.264222669Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:54.264270 containerd[1468]: time="2025-02-13T15:32:54.264238509Z" level=info msg="StopPodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:54.264476 containerd[1468]: time="2025-02-13T15:32:54.264448052Z" level=info msg="RemovePodSandbox for \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:54.264476 containerd[1468]: time="2025-02-13T15:32:54.264469352Z" level=info msg="Forcibly stopping sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\"" Feb 13 15:32:54.264574 containerd[1468]: time="2025-02-13T15:32:54.264540375Z" level=info msg="TearDown network for sandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" successfully" Feb 13 15:32:54.267996 containerd[1468]: time="2025-02-13T15:32:54.267952898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.267996 containerd[1468]: time="2025-02-13T15:32:54.267993093Z" level=info msg="RemovePodSandbox \"604c2e3827c2c3b064c1d9c747097791c5e6e999c7e71844c7a89ddb19305793\" returns successfully" Feb 13 15:32:54.268255 containerd[1468]: time="2025-02-13T15:32:54.268215119Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:54.268321 containerd[1468]: time="2025-02-13T15:32:54.268300980Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:54.268321 containerd[1468]: time="2025-02-13T15:32:54.268319605Z" level=info msg="StopPodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:54.268563 containerd[1468]: time="2025-02-13T15:32:54.268517897Z" level=info msg="RemovePodSandbox for \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:54.268563 containerd[1468]: time="2025-02-13T15:32:54.268541722Z" level=info msg="Forcibly stopping sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\"" Feb 13 15:32:54.268668 containerd[1468]: time="2025-02-13T15:32:54.268634095Z" level=info msg="TearDown network for sandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" successfully" Feb 13 15:32:54.271941 containerd[1468]: time="2025-02-13T15:32:54.271889874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.271941 containerd[1468]: time="2025-02-13T15:32:54.271932283Z" level=info msg="RemovePodSandbox \"97aa4403fba49db8d21e44e437bbd9d47491e37f56bebf621ed162621950e544\" returns successfully" Feb 13 15:32:54.272184 containerd[1468]: time="2025-02-13T15:32:54.272164068Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:54.272276 containerd[1468]: time="2025-02-13T15:32:54.272252094Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:54.272276 containerd[1468]: time="2025-02-13T15:32:54.272269908Z" level=info msg="StopPodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:54.272520 containerd[1468]: time="2025-02-13T15:32:54.272489449Z" level=info msg="RemovePodSandbox for \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:54.272520 containerd[1468]: time="2025-02-13T15:32:54.272510489Z" level=info msg="Forcibly stopping sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\"" Feb 13 15:32:54.272618 containerd[1468]: time="2025-02-13T15:32:54.272578166Z" level=info msg="TearDown network for sandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" successfully" Feb 13 15:32:54.276056 containerd[1468]: time="2025-02-13T15:32:54.276026405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.276112 containerd[1468]: time="2025-02-13T15:32:54.276069015Z" level=info msg="RemovePodSandbox \"317ebcbcba8b53520fe311df93c340d111756a3eea887b8ee9f0d29a8b23253f\" returns successfully" Feb 13 15:32:54.276311 containerd[1468]: time="2025-02-13T15:32:54.276286253Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:54.276400 containerd[1468]: time="2025-02-13T15:32:54.276371713Z" level=info msg="TearDown network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" successfully" Feb 13 15:32:54.276400 containerd[1468]: time="2025-02-13T15:32:54.276388034Z" level=info msg="StopPodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" returns successfully" Feb 13 15:32:54.276639 containerd[1468]: time="2025-02-13T15:32:54.276606133Z" level=info msg="RemovePodSandbox for \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:54.276712 containerd[1468]: time="2025-02-13T15:32:54.276639645Z" level=info msg="Forcibly stopping sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\"" Feb 13 15:32:54.276748 containerd[1468]: time="2025-02-13T15:32:54.276711811Z" level=info msg="TearDown network for sandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" successfully" Feb 13 15:32:54.282917 containerd[1468]: time="2025-02-13T15:32:54.282880756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.282969 containerd[1468]: time="2025-02-13T15:32:54.282931811Z" level=info msg="RemovePodSandbox \"dfdd467df3c00edd5e007b9f7abc937e4860ffac95a17b1a0196399cd24c365e\" returns successfully" Feb 13 15:32:54.283189 containerd[1468]: time="2025-02-13T15:32:54.283169487Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" Feb 13 15:32:54.283295 containerd[1468]: time="2025-02-13T15:32:54.283272671Z" level=info msg="TearDown network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" successfully" Feb 13 15:32:54.283295 containerd[1468]: time="2025-02-13T15:32:54.283288661Z" level=info msg="StopPodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" returns successfully" Feb 13 15:32:54.283544 containerd[1468]: time="2025-02-13T15:32:54.283511889Z" level=info msg="RemovePodSandbox for \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" Feb 13 15:32:54.283544 containerd[1468]: time="2025-02-13T15:32:54.283534091Z" level=info msg="Forcibly stopping sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\"" Feb 13 15:32:54.283641 containerd[1468]: time="2025-02-13T15:32:54.283607048Z" level=info msg="TearDown network for sandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" successfully" Feb 13 15:32:54.286862 containerd[1468]: time="2025-02-13T15:32:54.286829584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.286914 containerd[1468]: time="2025-02-13T15:32:54.286864389Z" level=info msg="RemovePodSandbox \"cdb296e40d464a2c19f651b16f36bb11c75a68227ae5f7340229f374abcaf587\" returns successfully" Feb 13 15:32:54.287085 containerd[1468]: time="2025-02-13T15:32:54.287065286Z" level=info msg="StopPodSandbox for \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\"" Feb 13 15:32:54.287206 containerd[1468]: time="2025-02-13T15:32:54.287150195Z" level=info msg="TearDown network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" successfully" Feb 13 15:32:54.287206 containerd[1468]: time="2025-02-13T15:32:54.287161837Z" level=info msg="StopPodSandbox for \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" returns successfully" Feb 13 15:32:54.287938 containerd[1468]: time="2025-02-13T15:32:54.287370500Z" level=info msg="RemovePodSandbox for \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\"" Feb 13 15:32:54.287938 containerd[1468]: time="2025-02-13T15:32:54.287393793Z" level=info msg="Forcibly stopping sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\"" Feb 13 15:32:54.287938 containerd[1468]: time="2025-02-13T15:32:54.287480606Z" level=info msg="TearDown network for sandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" successfully" Feb 13 15:32:54.290793 containerd[1468]: time="2025-02-13T15:32:54.290761301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:32:54.290852 containerd[1468]: time="2025-02-13T15:32:54.290794793Z" level=info msg="RemovePodSandbox \"b3f63c826992899732b02fc0865f20aebda85156324d02039e345d0dea7390ee\" returns successfully" Feb 13 15:32:55.165310 systemd[1]: run-containerd-runc-k8s.io-a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed-runc.8WTsFf.mount: Deactivated successfully. Feb 13 15:32:56.044111 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:58142.service - OpenSSH per-connection server daemon (10.0.0.1:58142). Feb 13 15:32:56.083720 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 58142 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:56.085201 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:56.089288 systemd-logind[1451]: New session 18 of user core. Feb 13 15:32:56.096031 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:32:56.211945 sshd[6029]: Connection closed by 10.0.0.1 port 58142 Feb 13 15:32:56.212369 sshd-session[6027]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:56.219813 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:58142.service: Deactivated successfully. Feb 13 15:32:56.221763 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:32:56.223480 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:32:56.230207 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:58152.service - OpenSSH per-connection server daemon (10.0.0.1:58152). Feb 13 15:32:56.231057 systemd-logind[1451]: Removed session 18. Feb 13 15:32:56.264914 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:56.266369 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:56.269978 systemd-logind[1451]: New session 19 of user core. 
Feb 13 15:32:56.285018 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:32:56.540159 sshd[6043]: Connection closed by 10.0.0.1 port 58152 Feb 13 15:32:56.540611 sshd-session[6041]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:56.548172 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:58152.service: Deactivated successfully. Feb 13 15:32:56.550207 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:32:56.551993 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:32:56.559324 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:58168.service - OpenSSH per-connection server daemon (10.0.0.1:58168). Feb 13 15:32:56.560126 systemd-logind[1451]: Removed session 19. Feb 13 15:32:56.597475 sshd[6053]: Accepted publickey for core from 10.0.0.1 port 58168 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:56.598922 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:56.603068 systemd-logind[1451]: New session 20 of user core. Feb 13 15:32:56.612025 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:32:57.933756 systemd[1]: run-containerd-runc-k8s.io-a8754004b6c53107755f311f104fd71d8d0ef0caea6d74c7d1ce5138c6e241ed-runc.EdOfzM.mount: Deactivated successfully. Feb 13 15:32:58.515738 sshd[6055]: Connection closed by 10.0.0.1 port 58168 Feb 13 15:32:58.516512 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.534605 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:58176.service - OpenSSH per-connection server daemon (10.0.0.1:58176). Feb 13 15:32:58.535404 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:58168.service: Deactivated successfully. Feb 13 15:32:58.538085 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:32:58.540091 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. 
Feb 13 15:32:58.541238 systemd-logind[1451]: Removed session 20. Feb 13 15:32:58.574652 sshd[6092]: Accepted publickey for core from 10.0.0.1 port 58176 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:58.575966 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:58.579673 systemd-logind[1451]: New session 21 of user core. Feb 13 15:32:58.588047 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:32:58.801578 sshd[6096]: Connection closed by 10.0.0.1 port 58176 Feb 13 15:32:58.802002 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.810021 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:58176.service: Deactivated successfully. Feb 13 15:32:58.812307 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:32:58.813980 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:32:58.821392 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182). Feb 13 15:32:58.822215 systemd-logind[1451]: Removed session 21. Feb 13 15:32:58.853493 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:32:58.854844 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:58.858348 systemd-logind[1451]: New session 22 of user core. Feb 13 15:32:58.867028 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:32:58.969543 sshd[6108]: Connection closed by 10.0.0.1 port 58182 Feb 13 15:32:58.969926 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:58.973853 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:58182.service: Deactivated successfully. Feb 13 15:32:58.975978 systemd[1]: session-22.scope: Deactivated successfully. 
Feb 13 15:32:58.976524 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:32:58.977341 systemd-logind[1451]: Removed session 22. Feb 13 15:33:03.988353 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:58196.service - OpenSSH per-connection server daemon (10.0.0.1:58196). Feb 13 15:33:04.027969 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:33:04.029492 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:04.033375 systemd-logind[1451]: New session 23 of user core. Feb 13 15:33:04.042046 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:33:04.151609 sshd[6128]: Connection closed by 10.0.0.1 port 58196 Feb 13 15:33:04.152014 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:04.156318 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:58196.service: Deactivated successfully. Feb 13 15:33:04.158529 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:33:04.159240 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:33:04.160152 systemd-logind[1451]: Removed session 23. Feb 13 15:33:09.164264 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:48866.service - OpenSSH per-connection server daemon (10.0.0.1:48866). Feb 13 15:33:09.201788 sshd[6141]: Accepted publickey for core from 10.0.0.1 port 48866 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:33:09.203456 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:09.207200 systemd-logind[1451]: New session 24 of user core. Feb 13 15:33:09.214039 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 15:33:09.316800 sshd[6143]: Connection closed by 10.0.0.1 port 48866 Feb 13 15:33:09.317177 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:09.321281 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:48866.service: Deactivated successfully. Feb 13 15:33:09.323439 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:33:09.324027 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:33:09.324793 systemd-logind[1451]: Removed session 24. Feb 13 15:33:09.542183 systemd[1]: run-containerd-runc-k8s.io-6f95138dffcd1dc28762e9cddc1cfeb5e5b0ea5ee1d4f9212c7bf0ea9677b326-runc.kbT04l.mount: Deactivated successfully. Feb 13 15:33:11.126630 kubelet[2663]: E0213 15:33:11.126578 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:14.328781 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:48868.service - OpenSSH per-connection server daemon (10.0.0.1:48868). Feb 13 15:33:14.366131 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 48868 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:33:14.367606 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:14.371690 systemd-logind[1451]: New session 25 of user core. Feb 13 15:33:14.378044 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:33:14.481844 sshd[6182]: Connection closed by 10.0.0.1 port 48868 Feb 13 15:33:14.482249 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:14.485989 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:48868.service: Deactivated successfully. Feb 13 15:33:14.488091 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:33:14.488675 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit. 
Feb 13 15:33:14.489566 systemd-logind[1451]: Removed session 25. Feb 13 15:33:15.126266 kubelet[2663]: E0213 15:33:15.126224 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:33:19.494129 systemd[1]: Started sshd@25-10.0.0.113:22-10.0.0.1:51626.service - OpenSSH per-connection server daemon (10.0.0.1:51626). Feb 13 15:33:19.533042 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 51626 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:33:19.534772 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:19.538992 systemd-logind[1451]: New session 26 of user core. Feb 13 15:33:19.549090 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:33:19.657185 sshd[6204]: Connection closed by 10.0.0.1 port 51626 Feb 13 15:33:19.657540 sshd-session[6202]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:19.661796 systemd[1]: sshd@25-10.0.0.113:22-10.0.0.1:51626.service: Deactivated successfully. Feb 13 15:33:19.663989 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:33:19.664622 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:33:19.665662 systemd-logind[1451]: Removed session 26.