Feb 13 15:26:38.930644 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:26:38.930665 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:26:38.930677 kernel: BIOS-provided physical RAM map:
Feb 13 15:26:38.930684 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:26:38.930690 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:26:38.930696 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:26:38.930703 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:26:38.930710 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:26:38.930716 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:26:38.930722 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:26:38.930732 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:26:38.930738 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:26:38.930748 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:26:38.930755 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:26:38.930765 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:26:38.930772 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:26:38.930781 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:26:38.930788 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:26:38.930795 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:26:38.930802 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:26:38.930808 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:26:38.930815 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:26:38.930822 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:26:38.930828 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:26:38.930835 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:26:38.930842 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:26:38.930848 kernel: NX (Execute Disable) protection: active
Feb 13 15:26:38.930857 kernel: APIC: Static calls initialized
Feb 13 15:26:38.930864 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:26:38.930871 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:26:38.930878 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:26:38.930884 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:26:38.930891 kernel: extended physical RAM map:
Feb 13 15:26:38.930897 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:26:38.930904 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:26:38.930911 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:26:38.930918 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:26:38.930924 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:26:38.930934 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:26:38.930941 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:26:38.930951 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:26:38.930958 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:26:38.930965 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:26:38.930972 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:26:38.930979 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:26:38.930992 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:26:38.932244 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:26:38.932253 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:26:38.932260 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:26:38.932269 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:26:38.932277 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:26:38.932284 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:26:38.932291 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:26:38.932298 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:26:38.932310 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:26:38.932317 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:26:38.932324 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:26:38.932331 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:26:38.932341 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:26:38.932349 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:26:38.932356 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:26:38.932363 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:26:38.932371 kernel: random: crng init done
Feb 13 15:26:38.932378 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:26:38.932385 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:26:38.932397 kernel: secureboot: Secure boot disabled
Feb 13 15:26:38.932405 kernel: SMBIOS 2.8 present.
Feb 13 15:26:38.932412 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:26:38.932419 kernel: Hypervisor detected: KVM
Feb 13 15:26:38.932426 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:26:38.932433 kernel: kvm-clock: using sched offset of 3953836000 cycles
Feb 13 15:26:38.932441 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:26:38.932449 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:26:38.932456 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:26:38.932464 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:26:38.932471 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:26:38.932482 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:26:38.932489 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:26:38.932496 kernel: Using GB pages for direct mapping
Feb 13 15:26:38.932504 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:26:38.932511 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:26:38.932519 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:26:38.932526 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932534 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932541 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:26:38.932551 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932558 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932565 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932573 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:38.932580 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:26:38.932587 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:26:38.932595 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:26:38.932602 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:26:38.932612 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:26:38.932619 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:26:38.932626 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:26:38.932633 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:26:38.932643 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:26:38.932650 kernel: No NUMA configuration found
Feb 13 15:26:38.932658 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:26:38.932665 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:26:38.932674 kernel: Zone ranges:
Feb 13 15:26:38.932684 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:26:38.932697 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:26:38.932707 kernel: Normal empty
Feb 13 15:26:38.932721 kernel: Movable zone start for each node
Feb 13 15:26:38.932730 kernel: Early memory node ranges
Feb 13 15:26:38.932740 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:26:38.932766 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:26:38.932794 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:26:38.932807 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:26:38.932822 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:26:38.932835 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:26:38.932842 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:26:38.932849 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:26:38.932857 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:26:38.932864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:26:38.932871 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:26:38.932886 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:26:38.932896 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:26:38.932904 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:26:38.932922 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:26:38.932930 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:26:38.932941 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:26:38.932952 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:26:38.932960 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:26:38.932967 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:26:38.932975 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:26:38.932982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:26:38.932992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:26:38.933010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:26:38.933019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:26:38.933027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:26:38.933034 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:26:38.933042 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:26:38.933049 kernel: TSC deadline timer available
Feb 13 15:26:38.933057 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:26:38.933065 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:26:38.933075 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:26:38.933083 kernel: kvm-guest: setup PV sched yield
Feb 13 15:26:38.933090 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:26:38.933098 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:26:38.933106 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:26:38.933114 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:26:38.933121 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:26:38.933143 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:26:38.933171 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:26:38.933191 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:26:38.933199 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:26:38.933208 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:26:38.933217 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:26:38.933224 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:26:38.933235 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:26:38.933243 kernel: Fallback order for Node 0: 0
Feb 13 15:26:38.933250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:26:38.933261 kernel: Policy zone: DMA32
Feb 13 15:26:38.933269 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:26:38.933277 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 15:26:38.933284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:26:38.933292 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:26:38.933300 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:26:38.933307 kernel: Dynamic Preempt: voluntary
Feb 13 15:26:38.933315 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:26:38.933329 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:26:38.933339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:26:38.933348 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:26:38.933358 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:26:38.933366 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:26:38.933376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:26:38.933383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:26:38.933391 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:26:38.933399 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:26:38.933406 kernel: Console: colour dummy device 80x25
Feb 13 15:26:38.933417 kernel: printk: console [ttyS0] enabled
Feb 13 15:26:38.933424 kernel: ACPI: Core revision 20230628
Feb 13 15:26:38.933432 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:26:38.933440 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:26:38.933447 kernel: x2apic enabled
Feb 13 15:26:38.933455 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:26:38.933465 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:26:38.933473 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:26:38.933481 kernel: kvm-guest: setup PV IPIs
Feb 13 15:26:38.933489 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:26:38.933499 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:26:38.933506 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:26:38.933514 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:26:38.933522 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:26:38.933530 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:26:38.933537 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:26:38.933545 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:26:38.933553 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:26:38.933563 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:26:38.933571 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:26:38.933579 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:26:38.933586 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:26:38.933594 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:26:38.933602 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:26:38.933610 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:26:38.933620 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:26:38.933628 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:26:38.933639 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:26:38.933646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:26:38.933654 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:26:38.933662 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:26:38.933670 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:26:38.933683 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:26:38.933692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:26:38.933699 kernel: landlock: Up and running.
Feb 13 15:26:38.933710 kernel: SELinux: Initializing.
Feb 13 15:26:38.933721 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:26:38.933729 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:26:38.933736 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:26:38.933744 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:26:38.933752 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:26:38.933760 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:26:38.933768 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:26:38.933775 kernel: ... version: 0
Feb 13 15:26:38.933785 kernel: ... bit width: 48
Feb 13 15:26:38.933793 kernel: ... generic registers: 6
Feb 13 15:26:38.933800 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:26:38.933808 kernel: ... max period: 00007fffffffffff
Feb 13 15:26:38.933816 kernel: ... fixed-purpose events: 0
Feb 13 15:26:38.933823 kernel: ... event mask: 000000000000003f
Feb 13 15:26:38.933831 kernel: signal: max sigframe size: 1776
Feb 13 15:26:38.933838 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:26:38.933846 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:26:38.933854 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:26:38.933864 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:26:38.933872 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:26:38.933879 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:26:38.933887 kernel: smpboot: Max logical packages: 1
Feb 13 15:26:38.933895 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:26:38.933902 kernel: devtmpfs: initialized
Feb 13 15:26:38.933910 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:26:38.933917 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:26:38.933925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:26:38.933944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:26:38.933953 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:26:38.933961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:26:38.933969 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:26:38.933976 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:26:38.933984 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:26:38.933992 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:26:38.933999 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:26:38.934010 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:26:38.934018 kernel: audit: type=2000 audit(1739460398.341:1): state=initialized audit_enabled=0 res=1
Feb 13 15:26:38.934026 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:26:38.934033 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:26:38.934041 kernel: cpuidle: using governor menu
Feb 13 15:26:38.934048 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:26:38.934056 kernel: dca service started, version 1.12.1
Feb 13 15:26:38.934072 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:26:38.934080 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:26:38.934091 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:26:38.934099 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:26:38.934107 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:26:38.934117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:26:38.934128 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:26:38.934150 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:26:38.934166 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:26:38.934173 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:26:38.934181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:26:38.934188 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:26:38.934200 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:26:38.934207 kernel: ACPI: Interpreter enabled
Feb 13 15:26:38.934215 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:26:38.934222 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:26:38.934230 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:26:38.934238 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:26:38.934246 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:26:38.934262 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:26:38.934536 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:26:38.934686 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:26:38.934815 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:26:38.934826 kernel: PCI host bridge to bus 0000:00
Feb 13 15:26:38.934968 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:26:38.936470 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:26:38.937694 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:26:38.937836 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:26:38.937968 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:26:38.938100 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:26:38.938266 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:26:38.938464 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:26:38.938625 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:26:38.938783 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:26:38.938912 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:26:38.939037 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:26:38.939188 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:26:38.939319 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:26:38.939488 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:26:38.939622 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:26:38.940889 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:26:38.941035 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:26:38.941203 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:26:38.941339 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:26:38.941467 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:26:38.941594 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:26:38.941733 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:26:38.941867 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:26:38.941995 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:26:38.942121 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:26:38.942287 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:26:38.942438 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:26:38.942565 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:26:38.942707 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:26:38.942841 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:26:38.942968 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:26:38.943108 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:26:38.943264 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:26:38.943276 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:26:38.943285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:26:38.943295 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:26:38.943308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:26:38.943315 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:26:38.943323 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:26:38.943331 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:26:38.943339 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:26:38.943346 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:26:38.943354 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:26:38.943362 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:26:38.943370 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:26:38.943380 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:26:38.943388 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:26:38.943395 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:26:38.943403 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:26:38.943411 kernel: iommu: Default domain type: Translated
Feb 13 15:26:38.943419 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:26:38.943426 kernel: efivars: Registered efivars operations
Feb 13 15:26:38.943434 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:26:38.943442 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:26:38.943452 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:26:38.943460 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:26:38.943467 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:26:38.943475 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:26:38.943483 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:26:38.943490 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:26:38.943498 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:26:38.943505 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:26:38.943636 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:26:38.943763 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:26:38.943889 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:26:38.943900 kernel: vgaarb: loaded
Feb 13 15:26:38.943908 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:26:38.943915 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:26:38.943923 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:26:38.943931 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:26:38.943939 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:26:38.943951 kernel: pnp: PnP ACPI init
Feb 13 15:26:38.944100 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:26:38.944112 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:26:38.944120 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:26:38.944128 kernel: NET: Registered PF_INET protocol family
Feb 13 15:26:38.944176 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:26:38.944187 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:26:38.944196 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:26:38.944206 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:26:38.944214 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:26:38.944222 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:26:38.944230 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:26:38.944238 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:26:38.944246 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:26:38.944254 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:26:38.944388 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:26:38.944520 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:26:38.944640 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:26:38.944757 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:26:38.944872 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:26:38.944987 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:26:38.945103 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:26:38.945292 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:26:38.945305 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:26:38.945318 kernel: Initialise system trusted keyrings
Feb 13 15:26:38.945329 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:26:38.945338 kernel: Key type asymmetric registered
Feb 13 15:26:38.945348 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:26:38.945356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:26:38.945365 kernel: io scheduler mq-deadline registered
Feb 13 15:26:38.945375 kernel: io scheduler kyber registered
Feb 13 15:26:38.945384 kernel: io scheduler bfq registered
Feb 13 15:26:38.945394 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:26:38.945403 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:26:38.945414 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:26:38.945425 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:26:38.945433 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:26:38.945442 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:26:38.945450 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:26:38.945461 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:26:38.945469 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:26:38.945631 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:26:38.945644 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:26:38.945762 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:26:38.945883 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:26:38 UTC (1739460398)
Feb 13 15:26:38.946000 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:26:38.946012 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:26:38.946023 kernel: efifb: probing for efifb
Feb 13 15:26:38.946032 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:26:38.946040 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:26:38.946048 kernel: efifb: scrolling: redraw
Feb 13 15:26:38.946056 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:26:38.946065 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:26:38.946073 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:26:38.946081 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:26:38.946089 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:26:38.946100 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:26:38.946108 kernel: Segment Routing with IPv6
Feb 13 15:26:38.946116 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:26:38.946124 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:26:38.946145 kernel: Key type dns_resolver registered
Feb 13 15:26:38.946154 kernel: IPI shorthand broadcast: enabled
Feb 13 15:26:38.946169 kernel: sched_clock: Marking stable (1214003155, 163414994)->(1451608204, -74190055)
Feb 13 15:26:38.946177 kernel: registered taskstats version 1
Feb 13 15:26:38.946185 kernel: Loading compiled-in X.509 certificates
Feb 13 15:26:38.946196 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:26:38.946204 kernel: Key type .fscrypt registered
Feb 13 15:26:38.946212 kernel: Key type fscrypt-provisioning registered
Feb 13 15:26:38.946221 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:26:38.946229 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:26:38.946237 kernel: ima: No architecture policies found
Feb 13 15:26:38.946245 kernel: clk: Disabling unused clocks
Feb 13 15:26:38.946254 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:26:38.946262 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:26:38.946273 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:26:38.946281 kernel: Run /init as init process
Feb 13 15:26:38.946289 kernel: with arguments:
Feb 13 15:26:38.946299 kernel: /init
Feb 13 15:26:38.946308 kernel: with environment:
Feb 13 15:26:38.946316 kernel: HOME=/
Feb 13 15:26:38.946323 kernel: TERM=linux
Feb 13 15:26:38.946331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:26:38.946342 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:26:38.946355 systemd[1]: Detected virtualization kvm.
Feb 13 15:26:38.946364 systemd[1]: Detected architecture x86-64.
Feb 13 15:26:38.946372 systemd[1]: Running in initrd.
Feb 13 15:26:38.946380 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:26:38.946389 systemd[1]: Hostname set to <localhost>.
Feb 13 15:26:38.946398 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:26:38.946406 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:26:38.946418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:38.946427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:38.946436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:26:38.946445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:26:38.946453 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:26:38.946463 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:26:38.947612 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:26:38.947627 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:26:38.947635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:38.947644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:38.947652 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:26:38.947661 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:26:38.947669 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:26:38.947678 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:26:38.947686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:26:38.947698 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:26:38.947718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:26:38.947735 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:26:38.947759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:38.947777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:38.947786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:38.947795 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:26:38.947819 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:26:38.947828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:26:38.947840 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:26:38.947849 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:26:38.947857 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:26:38.947866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:26:38.947875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:38.947884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:26:38.947893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:38.947902 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:26:38.947936 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 15:26:38.947959 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:26:38.947969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:38.947978 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:26:38.947987 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:38.947996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:26:38.948006 systemd-journald[194]: Journal started
Feb 13 15:26:38.948026 systemd-journald[194]: Runtime Journal (/run/log/journal/606c327a1dd74fcc8d28c78deabd39c8) is 6.0M, max 48.3M, 42.2M free.
Feb 13 15:26:38.924572 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 15:26:38.950908 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:26:38.951846 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:26:38.957278 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:26:38.960146 kernel: Bridge firewalling registered
Feb 13 15:26:38.960113 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 15:26:38.967431 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:38.970081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:38.970847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:38.980324 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:26:38.982378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:38.984894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:38.995484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:38.998550 dracut-cmdline[224]: dracut-dracut-053
Feb 13 15:26:39.004630 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:26:39.004301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:39.042702 systemd-resolved[237]: Positive Trust Anchors:
Feb 13 15:26:39.042718 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:26:39.042750 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:26:39.045409 systemd-resolved[237]: Defaulting to hostname 'linux'.
Feb 13 15:26:39.046843 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:39.052664 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:39.107175 kernel: SCSI subsystem initialized
Feb 13 15:26:39.117170 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:26:39.127170 kernel: iscsi: registered transport (tcp)
Feb 13 15:26:39.149260 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:26:39.149288 kernel: QLogic iSCSI HBA Driver
Feb 13 15:26:39.212604 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:26:39.222282 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:26:39.248171 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:26:39.248211 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:26:39.249693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:26:39.292168 kernel: raid6: avx2x4 gen() 30111 MB/s
Feb 13 15:26:39.309162 kernel: raid6: avx2x2 gen() 30451 MB/s
Feb 13 15:26:39.326252 kernel: raid6: avx2x1 gen() 25803 MB/s
Feb 13 15:26:39.326280 kernel: raid6: using algorithm avx2x2 gen() 30451 MB/s
Feb 13 15:26:39.344251 kernel: raid6: .... xor() 19695 MB/s, rmw enabled
Feb 13 15:26:39.344288 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:26:39.365178 kernel: xor: automatically using best checksumming function avx
Feb 13 15:26:39.525194 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:26:39.542956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:26:39.555333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:39.573390 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Feb 13 15:26:39.579318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:39.587337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:26:39.604343 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Feb 13 15:26:39.644837 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:26:39.655326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:26:39.735954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:39.744526 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:26:39.760335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:26:39.763992 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:26:39.765708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:39.769238 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:26:39.779419 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:26:39.787759 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:26:39.806858 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:26:39.808293 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:26:39.808322 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:26:39.808343 kernel: GPT:9289727 != 19775487
Feb 13 15:26:39.808363 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:26:39.808380 kernel: GPT:9289727 != 19775487
Feb 13 15:26:39.808400 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:26:39.808416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:39.792505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:26:39.804370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:26:39.804489 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:39.806881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:39.808320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:39.817458 kernel: libata version 3.00 loaded.
Feb 13 15:26:39.808625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:39.810812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:39.820071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:39.833066 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:26:39.833106 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:26:39.833117 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:26:39.860939 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:26:39.860975 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:26:39.861245 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:26:39.861427 kernel: scsi host0: ahci
Feb 13 15:26:39.861624 kernel: scsi host1: ahci
Feb 13 15:26:39.861811 kernel: scsi host2: ahci
Feb 13 15:26:39.862057 kernel: scsi host3: ahci
Feb 13 15:26:39.862273 kernel: scsi host4: ahci
Feb 13 15:26:39.862454 kernel: scsi host5: ahci
Feb 13 15:26:39.862631 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (471)
Feb 13 15:26:39.862646 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 15:26:39.862659 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 15:26:39.862673 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459)
Feb 13 15:26:39.862686 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 15:26:39.862700 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 15:26:39.862717 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 15:26:39.862731 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 15:26:39.830914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:39.831068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:39.845534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:39.867340 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:26:39.872846 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:26:39.877714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:39.887080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:26:39.887553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:26:39.893956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:26:39.909481 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:26:39.911000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:39.938175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:39.943203 disk-uuid[561]: Primary Header is updated.
Feb 13 15:26:39.943203 disk-uuid[561]: Secondary Entries is updated.
Feb 13 15:26:39.943203 disk-uuid[561]: Secondary Header is updated.
Feb 13 15:26:39.948215 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:39.952173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:40.168454 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:26:40.168538 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:26:40.168550 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:26:40.170780 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:26:40.170804 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:26:40.171177 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:26:40.172348 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:26:40.172362 kernel: ata3.00: applying bridge limits
Feb 13 15:26:40.173431 kernel: ata3.00: configured for UDMA/100
Feb 13 15:26:40.174167 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:26:40.225789 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:26:40.237910 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:26:40.237938 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:26:40.954176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:40.954624 disk-uuid[570]: The operation has completed successfully.
Feb 13 15:26:40.984416 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:26:40.984549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:26:41.004373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:26:41.009857 sh[597]: Success
Feb 13 15:26:41.023157 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:26:41.057791 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:26:41.073785 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:26:41.078589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:26:41.090361 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:26:41.090389 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:41.090401 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:26:41.091391 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:26:41.092739 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:26:41.096975 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:26:41.098120 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:26:41.111274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:26:41.112350 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:26:41.126641 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:41.126699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:41.126711 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:41.129176 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:41.139034 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:26:41.141164 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:41.152226 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:26:41.158320 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:26:41.278402 ignition[695]: Ignition 2.20.0
Feb 13 15:26:41.278415 ignition[695]: Stage: fetch-offline
Feb 13 15:26:41.278459 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:41.278469 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:41.278561 ignition[695]: parsed url from cmdline: ""
Feb 13 15:26:41.278566 ignition[695]: no config URL provided
Feb 13 15:26:41.278571 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:41.282652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:41.278581 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:41.278613 ignition[695]: op(1): [started] loading QEMU firmware config module
Feb 13 15:26:41.278619 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:26:41.286154 ignition[695]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:26:41.290280 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:41.315986 systemd-networkd[785]: lo: Link UP
Feb 13 15:26:41.315997 systemd-networkd[785]: lo: Gained carrier
Feb 13 15:26:41.317936 systemd-networkd[785]: Enumeration completed
Feb 13 15:26:41.318207 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:41.318503 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:41.318507 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:41.319591 systemd-networkd[785]: eth0: Link UP
Feb 13 15:26:41.319595 systemd-networkd[785]: eth0: Gained carrier
Feb 13 15:26:41.319603 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:41.322265 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:41.338184 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:26:41.349231 ignition[695]: parsing config with SHA512: 2701c35c2032bd45c6a2088a19fd3c002dbf5491348e84e084f267b6b556ee873698117f82002903cdf8a4f5d05e4ad132a91a9931d9cf45d00cf80167fed1e0
Feb 13 15:26:41.353435 unknown[695]: fetched base config from "system"
Feb 13 15:26:41.353771 ignition[695]: fetch-offline: fetch-offline passed
Feb 13 15:26:41.353451 unknown[695]: fetched user config from "qemu"
Feb 13 15:26:41.353838 ignition[695]: Ignition finished successfully
Feb 13 15:26:41.357021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:41.357960 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:26:41.363398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:26:41.378186 ignition[789]: Ignition 2.20.0
Feb 13 15:26:41.378199 ignition[789]: Stage: kargs
Feb 13 15:26:41.378374 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:41.378386 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:41.379216 ignition[789]: kargs: kargs passed
Feb 13 15:26:41.379266 ignition[789]: Ignition finished successfully
Feb 13 15:26:41.382647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:41.398276 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:26:41.410651 ignition[797]: Ignition 2.20.0
Feb 13 15:26:41.410666 ignition[797]: Stage: disks
Feb 13 15:26:41.410825 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:41.410837 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:41.411683 ignition[797]: disks: disks passed
Feb 13 15:26:41.414036 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:26:41.411731 ignition[797]: Ignition finished successfully
Feb 13 15:26:41.415561 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:41.417106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:26:41.419316 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:41.420368 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:26:41.422119 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:26:41.432275 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:26:41.444703 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:26:41.451106 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:26:41.455339 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:26:41.546250 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:26:41.546909 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:26:41.549505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:26:41.567261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:41.569962 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:26:41.572347 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:26:41.572396 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:26:41.581831 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
Feb 13 15:26:41.581848 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:41.581860 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:41.581870 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:41.572421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:41.584502 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:26:41.586341 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:41.587404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
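The fsck line above gives exact inode and block counts for the ROOT filesystem. Assuming the common 4 KiB ext4 block size (the log does not state it), the numbers work out as follows:

```python
files_used, files_total = 14, 553_520
blocks_used, blocks_total = 52_654, 553_472
block_size = 4096  # assumed ext4 block size; not stated in the log

print(f"inodes in use: {files_used / files_total:.4%}")        # ~0.0025%
print(f"blocks in use: {blocks_used / blocks_total:.2%}")      # ~9.51%
print(f"filesystem size ≈ {blocks_total * block_size / 2**30:.2f} GiB")  # ~2.11
```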
Feb 13 15:26:41.598311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:26:41.630024 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:26:41.635564 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:26:41.640433 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:26:41.645885 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:26:41.739739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:41.750280 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:26:41.753912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:41.759162 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:41.779805 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:41.783291 ignition[929]: INFO : Ignition 2.20.0
Feb 13 15:26:41.783291 ignition[929]: INFO : Stage: mount
Feb 13 15:26:41.785034 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:41.785034 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:41.785034 ignition[929]: INFO : mount: mount passed
Feb 13 15:26:41.785034 ignition[929]: INFO : Ignition finished successfully
Feb 13 15:26:41.790781 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:26:41.803225 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:26:42.089911 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:26:42.107301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:42.115069 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943)
Feb 13 15:26:42.115109 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:42.115150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:42.116600 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:42.119165 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:42.120753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
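The four cut: warnings above are benign: on first boot /sysroot has no account databases yet, so the field-extraction step has nothing to read. A purely hypothetical illustration of the colon-delimited parsing that a `cut -d:`-style step performs on such files (the actual initrd-setup-root script is not shown in this log, and the sample values are invented):

```python
# Hypothetical passwd entry, used only to show the colon-delimited layout
# that the failed `cut` invocations above would have been extracting from.
passwd_line = "core:x:500:500:Flatcar Admin:/home/core:/bin/bash"
name, _pw, uid, gid, _gecos, home, shell = passwd_line.split(":")
print(name, uid, gid, home, shell)
```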
Feb 13 15:26:42.150628 ignition[960]: INFO : Ignition 2.20.0
Feb 13 15:26:42.150628 ignition[960]: INFO : Stage: files
Feb 13 15:26:42.152443 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:42.152443 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:42.155310 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:26:42.156609 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:26:42.156609 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:26:42.160244 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:26:42.161782 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:26:42.161782 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:26:42.160832 unknown[960]: wrote ssh authorized keys file for user: core
Feb 13 15:26:42.166056 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:42.166056 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:26:42.200965 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:26:42.311454 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:42.313696 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:26:42.790663 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:26:42.985315 systemd-networkd[785]: eth0: Gained IPv6LL
Feb 13 15:26:43.147745 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:43.147745 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:26:43.151614 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:43.153843 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:43.153843 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:26:43.153843 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:26:43.158440 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:26:43.158440 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:26:43.158440 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:26:43.158440 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:26:43.183330 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:26:43.190713 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:43.192376 ignition[960]: INFO : files: files passed
Feb 13 15:26:43.192376 ignition[960]: INFO : Ignition finished successfully
Feb 13 15:26:43.194152 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:26:43.204298 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:26:43.206296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:26:43.208603 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:26:43.208721 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:26:43.217368 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:26:43.220184 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:43.220184 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:43.224506 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:43.223109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:43.224702 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:26:43.239304 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:26:43.265892 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:26:43.266061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:26:43.266784 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:26:43.271280 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:26:43.271761 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:26:43.274724 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:26:43.297466 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:43.307286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:26:43.318487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:43.319840 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:43.320545 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:26:43.320890 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:26:43.355806 ignition[1015]: INFO : Ignition 2.20.0
Feb 13 15:26:43.355806 ignition[1015]: INFO : Stage: umount
Feb 13 15:26:43.355806 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:43.355806 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:43.355806 ignition[1015]: INFO : umount: umount passed
Feb 13 15:26:43.355806 ignition[1015]: INFO : Ignition finished successfully
Feb 13 15:26:43.321023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:43.321760 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:26:43.322100 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:26:43.322445 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:26:43.322811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:43.323190 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:43.323513 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:26:43.323862 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:26:43.324192 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:26:43.324500 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:26:43.324827 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:26:43.325164 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:26:43.325286 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:26:43.325969 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:43.326538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:43.326832 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:26:43.326961 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:43.327514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:26:43.327621 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:26:43.328357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:26:43.328467 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:43.328924 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:26:43.329192 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:26:43.333255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:43.333696 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:26:43.334194 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:26:43.334682 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:26:43.334836 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:26:43.335225 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:26:43.335366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:26:43.335831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:26:43.335961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:43.336491 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:26:43.336609 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:26:43.338027 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:26:43.339216 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:43.339525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:26:43.339639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:43.339932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:26:43.340031 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:26:43.343579 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:26:43.343694 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:26:43.358002 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:26:43.358164 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:26:43.359772 systemd[1]: Stopped target network.target - Network.
Feb 13 15:26:43.361604 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:26:43.361665 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:26:43.363394 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:26:43.363445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:43.365253 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:26:43.365302 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:26:43.367535 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:26:43.367606 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:26:43.369642 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:26:43.371800 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:43.373210 systemd-networkd[785]: eth0: DHCPv6 lease lost
Feb 13 15:26:43.374917 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:26:43.375461 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:26:43.375588 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:26:43.378019 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:26:43.378124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:43.388326 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:26:43.391178 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:26:43.391241 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:43.392972 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:43.397022 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:26:43.397170 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:43.408571 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:26:43.408675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:43.409940 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:26:43.410024 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:43.410590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:26:43.410658 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:43.414682 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:26:43.414881 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:26:43.425957 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:26:43.426268 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:43.426985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:26:43.427069 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:43.429707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:26:43.429768 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:43.432420 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:26:43.432499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:26:43.433155 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:26:43.433215 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:26:43.433908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:26:43.433958 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:43.445453 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:26:43.446352 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:26:43.446438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:43.446733 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:26:43.446800 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:26:43.447058 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:26:43.447127 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:43.447552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:43.447626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:43.454361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:26:43.454543 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:26:43.549259 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:26:43.549398 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:43.551407 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:26:43.553072 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:26:43.553145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:43.564284 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:26:43.571518 systemd[1]: Switching root.
Feb 13 15:26:43.603155 systemd-journald[194]: Journal stopped
Feb 13 15:26:44.737367 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:26:44.737446 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:26:44.737461 kernel: SELinux: policy capability open_perms=1
Feb 13 15:26:44.737473 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:26:44.737486 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:26:44.737497 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:26:44.737521 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:26:44.737533 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:26:44.737546 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:26:44.737558 kernel: audit: type=1403 audit(1739460403.969:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:26:44.737571 systemd[1]: Successfully loaded SELinux policy in 39.856ms.
Feb 13 15:26:44.737599 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.894ms.
Feb 13 15:26:44.737613 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:26:44.737625 systemd[1]: Detected virtualization kvm.
Feb 13 15:26:44.737638 systemd[1]: Detected architecture x86-64.
Feb 13 15:26:44.737654 systemd[1]: Detected first boot.
Feb 13 15:26:44.737666 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:26:44.737683 zram_generator::config[1059]: No configuration found.
Feb 13 15:26:44.737697 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:26:44.737710 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:26:44.737722 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:26:44.737735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:26:44.737748 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:26:44.737763 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:26:44.737776 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:26:44.737788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:26:44.737800 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:26:44.737813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:26:44.737826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:26:44.737839 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:26:44.737852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:44.737864 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:44.737880 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:26:44.737892 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:26:44.737904 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:26:44.737917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:26:44.737929 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:26:44.737941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:44.737954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:26:44.737972 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:26:44.737987 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:26:44.737999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:26:44.738012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:44.738035 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:26:44.738049 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:26:44.738061 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:26:44.738074 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:26:44.738086 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:26:44.738101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:44.738114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
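Unit names like system-serial\x2dgetty.slice above show systemd's name escaping: '-' acts as the hierarchy separator in slice names, so a literal '-' inside a component is rewritten as \x2d. A simplified rendition of that escaping (the real systemd-escape also maps '/' to '-' and special-cases leading dots):

```python
SAFE = set("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_component(s: str) -> str:
    """Simplified systemd-escape: bytes outside the safe set become \\xXX,
    which is why a literal '-' shows up as \\x2d in the slice names above."""
    return "".join(c if c in SAFE else f"\\x{ord(c):02x}" for c in s)

print("system-" + escape_component("serial-getty") + ".slice")
# -> system-serial\x2dgetty.slice
```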
Feb 13 15:26:44.738127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:44.738152 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:26:44.738165 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:26:44.738178 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:26:44.738196 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:26:44.738208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:44.738221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:26:44.738236 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:26:44.738248 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:26:44.738261 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:26:44.738273 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:26:44.738285 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:26:44.738298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:44.738310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:26:44.738323 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:26:44.739084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:44.739105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:26:44.739118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:44.739147 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:26:44.739161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:44.739174 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:26:44.739188 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:26:44.739201 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:26:44.739213 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:26:44.739228 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:26:44.739240 kernel: fuse: init (API version 7.39)
Feb 13 15:26:44.739252 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:26:44.739264 kernel: loop: module loaded
Feb 13 15:26:44.739276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:26:44.739288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:26:44.739301 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:26:44.739334 systemd-journald[1129]: Collecting audit messages is disabled.
Feb 13 15:26:44.739360 kernel: ACPI: bus type drm_connector registered
Feb 13 15:26:44.739373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:26:44.739385 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:26:44.739397 systemd[1]: Stopped verity-setup.service.
Feb 13 15:26:44.739411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:44.739423 systemd-journald[1129]: Journal started
Feb 13 15:26:44.739448 systemd-journald[1129]: Runtime Journal (/run/log/journal/606c327a1dd74fcc8d28c78deabd39c8) is 6.0M, max 48.3M, 42.2M free.
Feb 13 15:26:44.507198 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:26:44.525009 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:26:44.525519 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:26:44.744923 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:26:44.745698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:26:44.746885 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:26:44.748116 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:26:44.749242 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:26:44.750510 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:26:44.751782 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:26:44.753095 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:26:44.754656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:44.756279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:26:44.756462 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:26:44.757993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:44.758202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:44.759665 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:26:44.759846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:26:44.761235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:44.761418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:44.762936 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:26:44.763122 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:26:44.764776 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:44.764954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:44.766381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:44.767838 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:26:44.769420 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:26:44.785216 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:26:44.793225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:26:44.795621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:26:44.796799 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:26:44.796830 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:44.798898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:26:44.802296 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:26:44.804639 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:26:44.805797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:44.808827 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:26:44.814341 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:26:44.815720 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:26:44.818190 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:26:44.823316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:26:44.824310 systemd-journald[1129]: Time spent on flushing to /var/log/journal/606c327a1dd74fcc8d28c78deabd39c8 is 14.005ms for 1040 entries.
Feb 13 15:26:44.824310 systemd-journald[1129]: System Journal (/var/log/journal/606c327a1dd74fcc8d28c78deabd39c8) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:26:44.959146 systemd-journald[1129]: Received client request to flush runtime journal.
Feb 13 15:26:44.959193 kernel: loop0: detected capacity change from 0 to 211296
Feb 13 15:26:44.959208 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:26:44.959220 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 15:26:44.825178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:44.829693 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:26:44.832472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:26:44.840852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:44.842366 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:26:44.843689 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:26:44.845213 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:26:44.860364 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:26:44.870770 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:26:44.879298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:44.884306 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:26:44.884320 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:26:44.890767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:26:44.898418 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:26:44.945957 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:26:44.959400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
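The journald line above quantifies the flush to persistent storage: 14.005 ms for 1040 entries, which works out to roughly 13.5 µs per entry:

```python
flush_ms, entries = 14.005, 1040  # figures from the journald line above
print(f"≈ {flush_ms / entries * 1000:.1f} µs per entry")  # ≈ 13.5 µs
print(f"≈ {entries / (flush_ms / 1000):,.0f} entries/s")  # ≈ 74,259/s
```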
Feb 13 15:26:44.961903 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:26:44.971707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:26:44.972422 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:26:44.975910 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:26:44.979167 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:26:44.979532 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:26:44.989398 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:26:44.991428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:45.012087 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:26:45.013153 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:26:45.025166 kernel: loop3: detected capacity change from 0 to 211296
Feb 13 15:26:45.034158 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 15:26:45.046152 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 15:26:45.057048 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:26:45.057686 (sd-merge)[1201]: Merged extensions into '/usr'.
Feb 13 15:26:45.062536 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:26:45.062650 systemd[1]: Reloading...
Feb 13 15:26:45.117161 zram_generator::config[1226]: No configuration found.
Feb 13 15:26:45.225868 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:26:45.260660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:45.310820 systemd[1]: Reloading finished in 247 ms.
Feb 13 15:26:45.352524 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:26:45.354264 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:26:45.372572 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:26:45.375379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:26:45.395285 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:26:45.395315 systemd[1]: Reloading...
Feb 13 15:26:45.416123 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:26:45.416631 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:26:45.417818 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:26:45.418154 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 15:26:45.418243 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 15:26:45.452163 zram_generator::config[1291]: No configuration found.
Feb 13 15:26:45.450830 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
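The (sd-merge) lines above record systemd-sysext layering the three extension images over /usr, which is why the subsequent reload suddenly knows about units such as docker.socket. A loose toy model of the merge semantics, with invented paths; the real mechanism is a read-only overlayfs where extension layers shadow the base on conflicting paths:

```python
# Toy model of the sysext merge: start from the base /usr tree and layer
# each extension's files on top. All paths here are illustrative only.
base = {"/usr/lib/os-release": "flatcar"}
extensions = {
    "containerd-flatcar": {"/usr/bin/containerd": "sysext"},
    "docker-flatcar":     {"/usr/bin/dockerd": "sysext"},
    "kubernetes":         {"/usr/bin/kubelet": "sysext"},
}
merged = dict(base)
for tree in extensions.values():
    merged.update(tree)  # later layers shadow earlier entries
print(f"Using extensions {', '.join(map(repr, extensions))}.")
print(sorted(merged))
```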
Feb 13 15:26:45.450850 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 15:26:45.463186 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:26:45.463368 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 15:26:45.565740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:45.615508 systemd[1]: Reloading finished in 219 ms.
Feb 13 15:26:45.633569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:26:45.646908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:45.655893 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:26:45.658476 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:26:45.661124 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:26:45.666495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:45.670464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:45.673016 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:26:45.676251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:45.676423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:45.685486 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:45.692410 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:45.695799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:45.696198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:45.702219 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:26:45.703272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:45.704625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:26:45.706637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:45.706839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:45.708884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:45.709119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:45.713594 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:45.713837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:45.714748 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Feb 13 15:26:45.724645 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:26:45.728940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
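The "Duplicate line for path ..., ignoring" warnings a little further up show systemd-tmpfiles' conflict rule: the first tmpfiles.d entry seen for a path wins and later entries for the same path are skipped with a warning. A rough model of that behavior; the entries below are invented to trigger the message:

```python
# First-entry-wins deduplication, loosely modeling systemd-tmpfiles.
# The second /root line is invented purely to provoke the warning.
lines = [
    ("earlier.conf:3", "/root"),
    ("systemd-flatcar.conf:6", "/var/log/journal"),
    ("provision.conf:20", "/root"),
]
seen: dict[str, str] = {}
for src, path in lines:
    if path in seen:
        print(f'{src}: Duplicate line for path "{path}", ignoring.')
    else:
        seen[path] = src
```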
Feb 13 15:26:45.729222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:45.736467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:45.739833 augenrules[1365]: No rules
Feb 13 15:26:45.740408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:45.743975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:45.745219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:45.751280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:26:45.752540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:45.753544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:45.755720 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:26:45.755965 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:26:45.757539 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:26:45.760678 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:26:45.763329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:45.767803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:45.770562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:45.770764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:45.772738 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:45.772925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:45.784441 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:26:45.791562 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:45.800316 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:26:45.802315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:45.807342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:45.809162 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386)
Feb 13 15:26:45.818736 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:26:45.827480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:45.831800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:45.833486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:45.844364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:45.847659 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:26:45.849163 augenrules[1401]: /sbin/augenrules: No change
Feb 13 15:26:45.849110 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:26:45.849157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:45.849885 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:26:45.851393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:45.851587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:45.856567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:26:45.856808 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:26:45.863901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:45.864108 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:45.867839 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:45.868071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:45.874082 augenrules[1433]: No rules
Feb 13 15:26:45.876958 systemd-resolved[1333]: Positive Trust Anchors:
Feb 13 15:26:45.876969 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:26:45.877009 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:26:45.881999 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:26:45.882781 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Feb 13 15:26:45.883303 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:26:45.885752 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:45.890098 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:26:45.900155 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:26:45.902671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:26:45.904102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:45.905446 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:26:45.919312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:26:45.920669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:26:45.920766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
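The positive trust anchor that systemd-resolved logs above is the DNSSEC root key: a DS record whose fields after the record type are the key tag, the signing algorithm (8 = RSASHA256), and the digest type (2 = SHA-256), followed by the digest itself. Unpacking it:

```python
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
# algorithm 8 = RSASHA256; digest type 2 = SHA-256, hence a 32-byte digest
print(f"root zone KSK: key tag {key_tag}, algorithm {algorithm}, "
      f"digest type {digest_type}, {len(bytes.fromhex(digest))}-byte digest")
```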
Feb 13 15:26:45.940170 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:26:45.942552 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:26:45.943177 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 15:26:45.944390 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:26:45.944567 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:26:45.953724 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:26:45.952681 systemd-networkd[1418]: lo: Link UP
Feb 13 15:26:45.952687 systemd-networkd[1418]: lo: Gained carrier
Feb 13 15:26:45.954462 systemd-networkd[1418]: Enumeration completed
Feb 13 15:26:45.955631 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:26:45.956253 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:45.956259 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:45.957352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:45.957866 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:45.957905 systemd-networkd[1418]: eth0: Link UP
Feb 13 15:26:45.957911 systemd-networkd[1418]: eth0: Gained carrier
Feb 13 15:26:45.957924 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:45.958723 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:45.959866 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:26:45.969204 systemd-networkd[1418]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:26:45.970237 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Feb 13 15:26:46.940037 systemd-resolved[1333]: Clock change detected. Flushing caches.
Feb 13 15:26:46.940162 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:26:46.940229 systemd-timesyncd[1422]: Initial clock synchronization to Thu 2025-02-13 15:26:46.939993 UTC.
Feb 13 15:26:46.941580 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:26:46.975717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:46.979604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:46.979822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:46.983425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
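Note the jump between the last timesyncd line before the sync and the first resolved line after it: systemd-timesyncd stepped the clock, so systemd-resolved logs "Clock change detected" and every timestamp from here on is shifted forward. The approximate size of the step can be read straight off the adjacent log times:

```python
from datetime import datetime

# Last timestamp logged before the sync and the first one after it.
before = datetime.fromisoformat("2025-02-13 15:26:45.970237")
after = datetime.fromisoformat("2025-02-13 15:26:46.940037")
print(f"clock stepped ≈ {(after - before).total_seconds():.3f} s forward")  # ≈ 0.970
```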
Feb 13 15:26:47.027784 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:26:47.040604 kernel: kvm_amd: TSC scaling supported Feb 13 15:26:47.040694 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:26:47.040713 kernel: kvm_amd: Nested Paging enabled Feb 13 15:26:47.040729 kernel: kvm_amd: LBR virtualization supported Feb 13 15:26:47.042040 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:26:47.042116 kernel: kvm_amd: Virtual GIF supported Feb 13 15:26:47.066391 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:26:47.074988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:47.105387 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:26:47.123780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:26:47.131505 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:26:47.168019 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:26:47.169712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:26:47.170938 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:26:47.172125 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:26:47.173526 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:26:47.175272 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:26:47.176538 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:26:47.177862 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:26:47.179224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:26:47.179271 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:26:47.180253 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:26:47.182222 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:26:47.185448 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:26:47.197489 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:26:47.200192 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:26:47.201879 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:26:47.203255 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:26:47.204375 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:26:47.205481 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:26:47.205508 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:26:47.206666 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:26:47.209020 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:26:47.211452 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:26:47.214774 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
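The target/timer/socket chain above is ordinary systemd dependency wiring: sysinit.target pulls in the timers and listening sockets, and docker.socket and sshd.socket then lazily start their daemons on first connection. A quick sketch for auditing the same wiring on a running machine:

```sh
systemctl list-timers                     # logrotate.timer, mdadm.timer, ...
systemctl list-sockets                    # docker.socket, sshd.socket, dbus.socket
systemctl list-dependencies basic.target  # the chain ending in basic.target above
```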
Feb 13 15:26:47.219203 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:26:47.220291 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:26:47.223747 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:26:47.230387 jq[1470]: false Feb 13 15:26:47.231551 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:26:47.235649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:26:47.244646 extend-filesystems[1471]: Found loop3 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found loop4 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found loop5 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found sr0 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda1 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda2 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda3 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found usr Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda4 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda6 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda7 Feb 13 15:26:47.245629 extend-filesystems[1471]: Found vda9 Feb 13 15:26:47.245629 extend-filesystems[1471]: Checking size of /dev/vda9 Feb 13 15:26:47.245186 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:26:47.260511 dbus-daemon[1469]: [system] SELinux support is enabled Feb 13 15:26:47.254600 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:26:47.256899 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:26:47.257632 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:26:47.261708 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:26:47.264219 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:26:47.266327 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:26:47.267876 extend-filesystems[1471]: Resized partition /dev/vda9 Feb 13 15:26:47.270144 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:26:47.274507 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:26:47.275889 jq[1490]: true Feb 13 15:26:47.278552 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1400) Feb 13 15:26:47.275761 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:26:47.275980 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:26:47.276335 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:26:47.276606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:26:47.281762 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:26:47.282235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:26:47.283524 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:26:47.305915 update_engine[1486]: I20250213 15:26:47.305826 1486 main.cc:92] Flatcar Update Engine starting Feb 13 15:26:47.337509 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:26:47.315781 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:26:47.337833 update_engine[1486]: I20250213 15:26:47.308274 1486 update_check_scheduler.cc:74] Next update check in 8m19s Feb 13 15:26:47.322255 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:26:47.337922 jq[1495]: true Feb 13 15:26:47.326854 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:26:47.326879 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:26:47.338400 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:26:47.338400 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:26:47.338400 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:26:47.328369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:26:47.343925 extend-filesystems[1471]: Resized filesystem in /dev/vda9 Feb 13 15:26:47.328385 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:26:47.331317 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:26:47.343812 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:26:47.345040 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:26:47.348465 tar[1494]: linux-amd64/helm Feb 13 15:26:47.388059 bash[1525]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:26:47.388618 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:26:47.391627 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:26:47.393849 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:26:47.395265 systemd-logind[1483]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:26:47.395297 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:26:47.395569 systemd-logind[1483]: New seat seat0. Feb 13 15:26:47.396341 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:26:47.452901 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:26:47.479677 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:26:47.486662 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:26:47.498325 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:26:47.498758 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:26:47.507076 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:26:47.518503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
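extend-filesystems grew the root ext4 on /dev/vda9 from 553472 to 1864699 4k blocks while it stayed mounted; ext4 supports this on-line. A hand-run equivalent, as a sketch only (growpart comes from cloud-utils and is an assumption here, not something the log shows):

```sh
lsblk /dev/vda           # confirm vda9 is the root partition
growpart /dev/vda 9      # widen the partition first, if it has room to grow
resize2fs /dev/vda9      # on-line grow of the mounted ext4 to fill the partition
```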
Feb 13 15:26:47.520685 containerd[1504]: time="2025-02-13T15:26:47.520562092Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:26:47.529663 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:26:47.532484 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:26:47.533963 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:26:47.546035 containerd[1504]: time="2025-02-13T15:26:47.545823807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547538332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547566305Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547581924Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547763735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547781018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547850137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.547862250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.548058658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.548071943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.548085238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:47.548902 containerd[1504]: time="2025-02-13T15:26:47.548094435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548196747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548451144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548570949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548584394Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548686135Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:26:47.549138 containerd[1504]: time="2025-02-13T15:26:47.548744755Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:26:47.554483 containerd[1504]: time="2025-02-13T15:26:47.554413676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:26:47.554528 containerd[1504]: time="2025-02-13T15:26:47.554492684Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:26:47.554528 containerd[1504]: time="2025-02-13T15:26:47.554512261Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:26:47.554565 containerd[1504]: time="2025-02-13T15:26:47.554530385Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:26:47.554565 containerd[1504]: time="2025-02-13T15:26:47.554544812Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:26:47.554750 containerd[1504]: time="2025-02-13T15:26:47.554719379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:26:47.555010 containerd[1504]: time="2025-02-13T15:26:47.554970520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:26:47.555138 containerd[1504]: time="2025-02-13T15:26:47.555114280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:26:47.555167 containerd[1504]: time="2025-02-13T15:26:47.555137964Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:26:47.555167 containerd[1504]: time="2025-02-13T15:26:47.555153213Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:26:47.555216 containerd[1504]: time="2025-02-13T15:26:47.555169083Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555216 containerd[1504]: time="2025-02-13T15:26:47.555183490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555216 containerd[1504]: time="2025-02-13T15:26:47.555209078Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555277 containerd[1504]: time="2025-02-13T15:26:47.555224286Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:26:47.555277 containerd[1504]: time="2025-02-13T15:26:47.555239555Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555277 containerd[1504]: time="2025-02-13T15:26:47.555253161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555277 containerd[1504]: time="2025-02-13T15:26:47.555267147Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555344 containerd[1504]: time="2025-02-13T15:26:47.555278979Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:26:47.555344 containerd[1504]: time="2025-02-13T15:26:47.555300008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555344 containerd[1504]: time="2025-02-13T15:26:47.555313303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555344 containerd[1504]: time="2025-02-13T15:26:47.555325837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555344 containerd[1504]: time="2025-02-13T15:26:47.555338561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555366874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555382122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555394746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555409794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555423270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555437997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555450260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555461772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555470 containerd[1504]: time="2025-02-13T15:26:47.555474506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555490055Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555510503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555522776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555534328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555588289Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555605982Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555616772Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555628034Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555637251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:26:47.555643 containerd[1504]: time="2025-02-13T15:26:47.555649684Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:26:47.555825 containerd[1504]: time="2025-02-13T15:26:47.555660464Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:26:47.555825 containerd[1504]: time="2025-02-13T15:26:47.555679500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:26:47.556053 containerd[1504]: time="2025-02-13T15:26:47.555933917Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:26:47.556053 containerd[1504]: time="2025-02-13T15:26:47.555981847Z" level=info msg="Connect containerd service" Feb 13 15:26:47.556053 containerd[1504]: time="2025-02-13T15:26:47.556009268Z" level=info msg="using legacy CRI server" Feb 13 15:26:47.556053 containerd[1504]: time="2025-02-13T15:26:47.556016662Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:26:47.556264 containerd[1504]: time="2025-02-13T15:26:47.556118543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:26:47.556774 containerd[1504]: time="2025-02-13T15:26:47.556735280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:26:47.557125 
containerd[1504]: time="2025-02-13T15:26:47.556919355Z" level=info msg="Start subscribing containerd event" Feb 13 15:26:47.557125 containerd[1504]: time="2025-02-13T15:26:47.557051102Z" level=info msg="Start recovering state" Feb 13 15:26:47.557125 containerd[1504]: time="2025-02-13T15:26:47.557079225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:26:47.557218 containerd[1504]: time="2025-02-13T15:26:47.557139258Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:26:47.557274 containerd[1504]: time="2025-02-13T15:26:47.557259042Z" level=info msg="Start event monitor" Feb 13 15:26:47.557329 containerd[1504]: time="2025-02-13T15:26:47.557317893Z" level=info msg="Start snapshots syncer" Feb 13 15:26:47.557462 containerd[1504]: time="2025-02-13T15:26:47.557389266Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:26:47.557462 containerd[1504]: time="2025-02-13T15:26:47.557401389Z" level=info msg="Start streaming server" Feb 13 15:26:47.557760 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:26:47.558142 containerd[1504]: time="2025-02-13T15:26:47.558121660Z" level=info msg="containerd successfully booted in 0.040960s" Feb 13 15:26:47.723599 tar[1494]: linux-amd64/LICENSE Feb 13 15:26:47.723740 tar[1494]: linux-amd64/README.md Feb 13 15:26:47.739529 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:26:48.178597 systemd-networkd[1418]: eth0: Gained IPv6LL Feb 13 15:26:48.182211 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:26:48.184202 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:26:48.202564 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:26:48.205026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:48.207271 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:26:48.226124 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:26:48.226430 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:26:48.228094 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:26:48.229706 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:26:48.837527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:48.839610 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:26:48.841153 systemd[1]: Startup finished in 1.353s (kernel) + 5.249s (initrd) + 3.942s (userspace) = 10.545s. Feb 13 15:26:48.844287 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:49.350768 kubelet[1581]: E0213 15:26:49.350638 1581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:49.355653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:49.355873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:49.356254 systemd[1]: kubelet.service: Consumed 1.032s CPU time. 
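The kubelet failure above is the expected pre-bootstrap state: the unit starts before anything has written /var/lib/kubelet/config.yaml, the file kubeadm normally generates during init/join. A minimal sketch of such a file, with illustrative values only; cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config logged earlier:

```sh
mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
```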
Feb 13 15:26:54.095636 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:26:54.096998 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700). Feb 13 15:26:54.149326 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:54.151467 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:54.161236 systemd-logind[1483]: New session 1 of user core. Feb 13 15:26:54.162884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:26:54.171595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:26:54.185092 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:26:54.204722 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:26:54.207964 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:26:54.317882 systemd[1600]: Queued start job for default target default.target. Feb 13 15:26:54.329644 systemd[1600]: Created slice app.slice - User Application Slice. Feb 13 15:26:54.329670 systemd[1600]: Reached target paths.target - Paths. Feb 13 15:26:54.329683 systemd[1600]: Reached target timers.target - Timers. Feb 13 15:26:54.331267 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:26:54.343035 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:26:54.343158 systemd[1600]: Reached target sockets.target - Sockets. Feb 13 15:26:54.343176 systemd[1600]: Reached target basic.target - Basic System. Feb 13 15:26:54.343212 systemd[1600]: Reached target default.target - Main User Target. Feb 13 15:26:54.343246 systemd[1600]: Startup finished in 128ms. Feb 13 15:26:54.343859 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:26:54.345777 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:26:54.406325 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:48706.service - OpenSSH per-connection server daemon (10.0.0.1:48706). Feb 13 15:26:54.450177 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 48706 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:54.451679 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:54.456088 systemd-logind[1483]: New session 2 of user core. Feb 13 15:26:54.470494 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:26:54.524747 sshd[1613]: Connection closed by 10.0.0.1 port 48706 Feb 13 15:26:54.525123 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:54.542715 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:48706.service: Deactivated successfully. Feb 13 15:26:54.544683 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:26:54.546470 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:26:54.555640 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:57336.service - OpenSSH per-connection server daemon (10.0.0.1:57336). Feb 13 15:26:54.556960 systemd-logind[1483]: Removed session 2. 
Feb 13 15:26:54.592146 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 57336 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:54.593547 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:54.597962 systemd-logind[1483]: New session 3 of user core. Feb 13 15:26:54.608515 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:26:54.658303 sshd[1620]: Connection closed by 10.0.0.1 port 57336 Feb 13 15:26:54.658793 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:54.672550 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:57336.service: Deactivated successfully. Feb 13 15:26:54.674520 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:26:54.676272 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:26:54.681701 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:57348.service - OpenSSH per-connection server daemon (10.0.0.1:57348). Feb 13 15:26:54.682665 systemd-logind[1483]: Removed session 3. Feb 13 15:26:54.716313 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 57348 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:54.718190 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:54.722507 systemd-logind[1483]: New session 4 of user core. Feb 13 15:26:54.738552 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:26:54.793704 sshd[1627]: Connection closed by 10.0.0.1 port 57348 Feb 13 15:26:54.794315 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:54.813400 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:57348.service: Deactivated successfully. Feb 13 15:26:54.815288 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:26:54.816999 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:26:54.818548 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:57350.service - OpenSSH per-connection server daemon (10.0.0.1:57350). Feb 13 15:26:54.819621 systemd-logind[1483]: Removed session 4. Feb 13 15:26:54.867057 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 57350 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:54.868693 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:54.873178 systemd-logind[1483]: New session 5 of user core. Feb 13 15:26:54.882470 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:26:54.941661 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:26:54.942076 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:54.958035 sudo[1635]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:54.959670 sshd[1634]: Connection closed by 10.0.0.1 port 57350 Feb 13 15:26:54.960175 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:54.975870 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:57350.service: Deactivated successfully. Feb 13 15:26:54.977963 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:26:54.980022 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:26:54.981820 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:57366.service - OpenSSH per-connection server daemon (10.0.0.1:57366). 
Feb 13 15:26:54.982598 systemd-logind[1483]: Removed session 5. Feb 13 15:26:55.018306 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 57366 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:55.019848 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:55.023924 systemd-logind[1483]: New session 6 of user core. Feb 13 15:26:55.033482 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:26:55.088933 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:26:55.089433 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:55.093941 sudo[1644]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:55.100998 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:26:55.101431 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:55.117723 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:26:55.149944 augenrules[1666]: No rules Feb 13 15:26:55.152042 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:26:55.152329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:26:55.153694 sudo[1643]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:55.155301 sshd[1642]: Connection closed by 10.0.0.1 port 57366 Feb 13 15:26:55.155719 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:55.175132 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:57366.service: Deactivated successfully. Feb 13 15:26:55.177108 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:26:55.178921 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:26:55.193839 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:57368.service - OpenSSH per-connection server daemon (10.0.0.1:57368). Feb 13 15:26:55.195089 systemd-logind[1483]: Removed session 6. Feb 13 15:26:55.231128 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 57368 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:55.232829 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:55.237087 systemd-logind[1483]: New session 7 of user core. Feb 13 15:26:55.247508 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:26:55.303524 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:26:55.303888 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:55.599604 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:26:55.599734 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:26:55.858320 dockerd[1697]: time="2025-02-13T15:26:55.858166968Z" level=info msg="Starting up" Feb 13 15:26:56.210429 dockerd[1697]: time="2025-02-13T15:26:56.210307186Z" level=info msg="Loading containers: start." 
Feb 13 15:26:56.388379 kernel: Initializing XFRM netlink socket Feb 13 15:26:56.486560 systemd-networkd[1418]: docker0: Link UP Feb 13 15:26:56.529046 dockerd[1697]: time="2025-02-13T15:26:56.528978449Z" level=info msg="Loading containers: done." Feb 13 15:26:56.546440 dockerd[1697]: time="2025-02-13T15:26:56.546329668Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:26:56.546661 dockerd[1697]: time="2025-02-13T15:26:56.546458109Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:26:56.546661 dockerd[1697]: time="2025-02-13T15:26:56.546599514Z" level=info msg="Daemon has completed initialization" Feb 13 15:26:56.586861 dockerd[1697]: time="2025-02-13T15:26:56.586780238Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:26:56.587124 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:26:57.306585 containerd[1504]: time="2025-02-13T15:26:57.306514957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:26:58.150862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342518148.mount: Deactivated successfully. Feb 13 15:26:59.509942 containerd[1504]: time="2025-02-13T15:26:59.509847462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:59.510643 containerd[1504]: time="2025-02-13T15:26:59.510553316Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:26:59.511844 containerd[1504]: time="2025-02-13T15:26:59.511788202Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:59.517311 containerd[1504]: time="2025-02-13T15:26:59.517114220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:59.520233 containerd[1504]: time="2025-02-13T15:26:59.520173056Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.213611352s" Feb 13 15:26:59.520233 containerd[1504]: time="2025-02-13T15:26:59.520231987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:26:59.545508 containerd[1504]: time="2025-02-13T15:26:59.545452935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:26:59.606178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:26:59.620546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:59.777843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:26:59.783710 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:59.857707 kubelet[1970]: E0213 15:26:59.857613 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:59.866493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:59.866769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:01.801092 containerd[1504]: time="2025-02-13T15:27:01.801003297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.801820 containerd[1504]: time="2025-02-13T15:27:01.801764785Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:27:01.803026 containerd[1504]: time="2025-02-13T15:27:01.802992508Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.806241 containerd[1504]: time="2025-02-13T15:27:01.806192909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.807579 containerd[1504]: time="2025-02-13T15:27:01.807523525Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 2.262030905s" Feb 13 15:27:01.807579 containerd[1504]: time="2025-02-13T15:27:01.807563480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 15:27:01.829655 containerd[1504]: time="2025-02-13T15:27:01.829568818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:27:02.951047 containerd[1504]: time="2025-02-13T15:27:02.950991325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:02.951835 containerd[1504]: time="2025-02-13T15:27:02.951795183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:27:02.952911 containerd[1504]: time="2025-02-13T15:27:02.952886479Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:02.955576 containerd[1504]: time="2025-02-13T15:27:02.955543702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 15:27:02.956626 containerd[1504]: time="2025-02-13T15:27:02.956594523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.126977996s" Feb 13 15:27:02.956663 containerd[1504]: time="2025-02-13T15:27:02.956628407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:27:02.977376 containerd[1504]: time="2025-02-13T15:27:02.977317115Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:27:03.933846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794916252.mount: Deactivated successfully. Feb 13 15:27:04.539236 containerd[1504]: time="2025-02-13T15:27:04.539167092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:04.540026 containerd[1504]: time="2025-02-13T15:27:04.539990396Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:27:04.542291 containerd[1504]: time="2025-02-13T15:27:04.542251947Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:04.544898 containerd[1504]: time="2025-02-13T15:27:04.544867592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:04.545655 containerd[1504]: time="2025-02-13T15:27:04.545597130Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.568241012s" Feb 13 15:27:04.545655 containerd[1504]: time="2025-02-13T15:27:04.545652644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:27:04.567651 containerd[1504]: time="2025-02-13T15:27:04.567605223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:27:05.112529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158778129.mount: Deactivated successfully. 
Feb 13 15:27:05.787449 containerd[1504]: time="2025-02-13T15:27:05.787370398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:05.788006 containerd[1504]: time="2025-02-13T15:27:05.787942551Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:27:05.789111 containerd[1504]: time="2025-02-13T15:27:05.789048986Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:05.791687 containerd[1504]: time="2025-02-13T15:27:05.791658380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:05.792795 containerd[1504]: time="2025-02-13T15:27:05.792755287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.225102905s" Feb 13 15:27:05.792857 containerd[1504]: time="2025-02-13T15:27:05.792795933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:27:05.818281 containerd[1504]: time="2025-02-13T15:27:05.818246642Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:27:06.458777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039479015.mount: Deactivated successfully. 
Feb 13 15:27:06.464064 containerd[1504]: time="2025-02-13T15:27:06.464012333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:06.464790 containerd[1504]: time="2025-02-13T15:27:06.464739126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:27:06.465985 containerd[1504]: time="2025-02-13T15:27:06.465953534Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:06.468060 containerd[1504]: time="2025-02-13T15:27:06.468028115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:06.468705 containerd[1504]: time="2025-02-13T15:27:06.468662805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 650.38814ms" Feb 13 15:27:06.468705 containerd[1504]: time="2025-02-13T15:27:06.468702770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:27:06.500750 containerd[1504]: time="2025-02-13T15:27:06.500652367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:27:07.036160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731707150.mount: Deactivated successfully. Feb 13 15:27:09.105119 containerd[1504]: time="2025-02-13T15:27:09.105042548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:09.124994 containerd[1504]: time="2025-02-13T15:27:09.124913844Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:27:09.146808 containerd[1504]: time="2025-02-13T15:27:09.146770743Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:09.164319 containerd[1504]: time="2025-02-13T15:27:09.164285329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:09.165428 containerd[1504]: time="2025-02-13T15:27:09.165395020Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.664667072s" Feb 13 15:27:09.165480 containerd[1504]: time="2025-02-13T15:27:09.165429204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:27:10.117114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:27:10.126593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:10.318237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:10.325478 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:10.417082 kubelet[2204]: E0213 15:27:10.416897 2204 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:10.428382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:10.428679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:12.316259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:12.324640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:12.345450 systemd[1]: Reloading requested from client PID 2220 ('systemctl') (unit session-7.scope)... Feb 13 15:27:12.345481 systemd[1]: Reloading... Feb 13 15:27:12.434385 zram_generator::config[2259]: No configuration found. Feb 13 15:27:12.807398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:12.884531 systemd[1]: Reloading finished in 538 ms. Feb 13 15:27:12.939869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:12.944620 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:27:12.944877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:12.946483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:13.105369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:13.117475 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:27:13.221654 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:27:13.221654 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:27:13.221654 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
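Two of the three deprecation notices above say the flag belongs in the config file rather than on the command line. A sketch of the first migration, reusing the endpoint from this boot's containerd; containerRuntimeEndpoint is a KubeletConfiguration (v1beta1) field, so this assumes the config.yaml from the earlier sketch already exists:

```sh
cat <<'EOF' >>/var/lib/kubelet/config.yaml
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
```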
Feb 13 15:27:13.221654 kubelet[2309]: I0213 15:27:13.220083 2309 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:27:13.889726 kubelet[2309]: I0213 15:27:13.889680 2309 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:27:13.889726 kubelet[2309]: I0213 15:27:13.889718 2309 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:27:13.890001 kubelet[2309]: I0213 15:27:13.889971 2309 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:27:13.925386 kubelet[2309]: I0213 15:27:13.925317 2309 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:13.926019 kubelet[2309]: E0213 15:27:13.925967 2309 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.943994 kubelet[2309]: I0213 15:27:13.943894 2309 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:27:13.946465 kubelet[2309]: I0213 15:27:13.946419 2309 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:27:13.946688 kubelet[2309]: I0213 15:27:13.946658 2309 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:27:13.946818 kubelet[2309]: I0213 15:27:13.946691 2309 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:27:13.946818 kubelet[2309]: I0213 15:27:13.946701 2309 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:27:13.946919 kubelet[2309]: I0213 15:27:13.946900 2309 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:13.947120 kubelet[2309]: I0213 15:27:13.947031 2309 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:27:13.947120 kubelet[2309]: 
I0213 15:27:13.947052 2309 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:27:13.947120 kubelet[2309]: I0213 15:27:13.947085 2309 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:27:13.947120 kubelet[2309]: I0213 15:27:13.947101 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:13.947927 kubelet[2309]: W0213 15:27:13.947856 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.947927 kubelet[2309]: W0213 15:27:13.947857 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.947927 kubelet[2309]: E0213 15:27:13.947892 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.947927 kubelet[2309]: E0213 15:27:13.947903 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.950392 kubelet[2309]: I0213 15:27:13.949999 2309 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:27:13.953300 kubelet[2309]: I0213 15:27:13.953239 2309 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:27:13.954412 kubelet[2309]: W0213 15:27:13.954378 2309 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:27:13.955230 kubelet[2309]: I0213 15:27:13.955189 2309 server.go:1256] "Started kubelet" Feb 13 15:27:13.955318 kubelet[2309]: I0213 15:27:13.955293 2309 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:27:13.956127 kubelet[2309]: I0213 15:27:13.956072 2309 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:27:13.956942 kubelet[2309]: I0213 15:27:13.956917 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:27:13.957004 kubelet[2309]: I0213 15:27:13.956986 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:27:13.957811 kubelet[2309]: I0213 15:27:13.957170 2309 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:27:13.960625 kubelet[2309]: E0213 15:27:13.960473 2309 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:27:13.960625 kubelet[2309]: I0213 15:27:13.960518 2309 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:27:13.960625 kubelet[2309]: I0213 15:27:13.960622 2309 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:27:13.960782 kubelet[2309]: I0213 15:27:13.960680 2309 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:27:13.961090 kubelet[2309]: W0213 15:27:13.961039 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.961090 kubelet[2309]: E0213 15:27:13.961092 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:13.961992 kubelet[2309]: E0213 15:27:13.961389 2309 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce0c72bea28a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:27:13.955168906 +0000 UTC m=+0.826141902,LastTimestamp:2025-02-13 15:27:13.955168906 +0000 UTC m=+0.826141902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:27:13.961992 kubelet[2309]: I0213 15:27:13.961802 2309 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:27:13.962195 kubelet[2309]: I0213 15:27:13.962009 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:27:13.963627 kubelet[2309]: E0213 15:27:13.962432 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Feb 13 15:27:13.963627 kubelet[2309]: E0213 15:27:13.962608 2309 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:27:13.964344 kubelet[2309]: I0213 15:27:13.964225 2309 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:27:13.991213 kubelet[2309]: I0213 15:27:13.990978 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:27:13.991213 kubelet[2309]: I0213 15:27:13.991095 2309 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:27:13.991213 kubelet[2309]: I0213 15:27:13.991116 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:27:13.991213 kubelet[2309]: I0213 15:27:13.991138 2309 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:13.993420 kubelet[2309]: I0213 15:27:13.993287 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:27:13.993420 kubelet[2309]: I0213 15:27:13.993328 2309 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:27:13.993420 kubelet[2309]: I0213 15:27:13.993373 2309 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:27:13.993420 kubelet[2309]: E0213 15:27:13.993450 2309 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:27:14.002224 kubelet[2309]: W0213 15:27:14.002078 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:14.002224 kubelet[2309]: E0213 15:27:14.002190 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:14.064392 kubelet[2309]: I0213 15:27:14.064321 2309 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:14.064837 kubelet[2309]: E0213 15:27:14.064714 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 13 15:27:14.095836 kubelet[2309]: E0213 15:27:14.095704 2309 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:27:14.164055 kubelet[2309]: E0213 15:27:14.163880 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Feb 13 15:27:14.268339 kubelet[2309]: I0213 15:27:14.266888 2309 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:14.269187 kubelet[2309]: E0213 15:27:14.269093 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 13 15:27:14.277916 kubelet[2309]: I0213 15:27:14.277752 
2309 policy_none.go:49] "None policy: Start" Feb 13 15:27:14.284209 kubelet[2309]: I0213 15:27:14.284158 2309 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:27:14.284299 kubelet[2309]: I0213 15:27:14.284219 2309 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:27:14.302803 kubelet[2309]: E0213 15:27:14.302721 2309 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:27:14.314819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:27:14.338687 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:27:14.343313 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:27:14.356796 kubelet[2309]: I0213 15:27:14.356743 2309 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:27:14.357153 kubelet[2309]: I0213 15:27:14.357123 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:27:14.367016 kubelet[2309]: E0213 15:27:14.366696 2309 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:27:14.565960 kubelet[2309]: E0213 15:27:14.565784 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Feb 13 15:27:14.672126 kubelet[2309]: I0213 15:27:14.672044 2309 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:14.672762 kubelet[2309]: E0213 15:27:14.672718 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 13 15:27:14.703998 kubelet[2309]: I0213 15:27:14.703914 2309 topology_manager.go:215] "Topology Admit Handler" podUID="dca0ccf6b994af5f0d8b06daf9445796" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:27:14.708612 kubelet[2309]: I0213 15:27:14.705672 2309 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:27:14.708612 kubelet[2309]: I0213 15:27:14.707277 2309 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:27:14.719152 systemd[1]: Created slice kubepods-burstable-poddca0ccf6b994af5f0d8b06daf9445796.slice - libcontainer container kubepods-burstable-poddca0ccf6b994af5f0d8b06daf9445796.slice. Feb 13 15:27:14.756573 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. 
Feb 13 15:27:14.767969 kubelet[2309]: I0213 15:27:14.767581 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:14.767969 kubelet[2309]: I0213 15:27:14.767650 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:14.767969 kubelet[2309]: I0213 15:27:14.767682 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:14.767969 kubelet[2309]: I0213 15:27:14.767708 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:14.767969 kubelet[2309]: I0213 15:27:14.767732 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:14.768261 kubelet[2309]: I0213 15:27:14.767755 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:14.768261 kubelet[2309]: I0213 15:27:14.767800 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:14.768261 kubelet[2309]: I0213 15:27:14.767824 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:14.768261 kubelet[2309]: I0213 15:27:14.767847 2309 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:14.774698 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. Feb 13 15:27:14.956946 kubelet[2309]: W0213 15:27:14.956777 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:14.956946 kubelet[2309]: E0213 15:27:14.956849 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.048060 kubelet[2309]: E0213 15:27:15.048008 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:15.048808 containerd[1504]: time="2025-02-13T15:27:15.048760831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dca0ccf6b994af5f0d8b06daf9445796,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:15.070297 kubelet[2309]: E0213 15:27:15.070246 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:15.070893 containerd[1504]: time="2025-02-13T15:27:15.070855296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:15.081211 kubelet[2309]: E0213 15:27:15.081162 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:15.081728 containerd[1504]: time="2025-02-13T15:27:15.081684164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:15.082061 kubelet[2309]: W0213 15:27:15.082028 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.082061 kubelet[2309]: E0213 15:27:15.082061 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.194690 kubelet[2309]: W0213 15:27:15.194582 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.194690 kubelet[2309]: E0213 15:27:15.194667 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.367301 kubelet[2309]: E0213 15:27:15.367122 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Feb 13 15:27:15.474290 kubelet[2309]: I0213 15:27:15.474248 2309 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:15.474757 kubelet[2309]: E0213 15:27:15.474721 2309 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 13 15:27:15.531491 kubelet[2309]: W0213 15:27:15.531380 2309 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:15.531491 kubelet[2309]: E0213 15:27:15.531452 2309 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:16.097880 kubelet[2309]: E0213 15:27:16.097836 2309 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Feb 13 15:27:16.634853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046973222.mount: Deactivated successfully. 
Feb 13 15:27:16.649378 containerd[1504]: time="2025-02-13T15:27:16.649254578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:16.653730 containerd[1504]: time="2025-02-13T15:27:16.653634903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:27:16.654728 containerd[1504]: time="2025-02-13T15:27:16.654687878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:16.655736 containerd[1504]: time="2025-02-13T15:27:16.655685519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:16.656824 containerd[1504]: time="2025-02-13T15:27:16.656771746Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:16.658434 containerd[1504]: time="2025-02-13T15:27:16.658313818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:27:16.659745 containerd[1504]: time="2025-02-13T15:27:16.659696661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:27:16.661857 containerd[1504]: time="2025-02-13T15:27:16.661812619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:16.664161 containerd[1504]: time="2025-02-13T15:27:16.664113634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.582329373s" Feb 13 15:27:16.665073 containerd[1504]: time="2025-02-13T15:27:16.665004345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.616132806s" Feb 13 15:27:16.668903 containerd[1504]: time="2025-02-13T15:27:16.668834287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.597882239s" Feb 13 15:27:16.790908 containerd[1504]: time="2025-02-13T15:27:16.789178315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:16.790908 containerd[1504]: time="2025-02-13T15:27:16.789246203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:16.790908 containerd[1504]: time="2025-02-13T15:27:16.789260449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.790908 containerd[1504]: time="2025-02-13T15:27:16.789338155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.792230 containerd[1504]: time="2025-02-13T15:27:16.792068315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:16.792230 containerd[1504]: time="2025-02-13T15:27:16.792151271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:16.792230 containerd[1504]: time="2025-02-13T15:27:16.792172210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.792393 containerd[1504]: time="2025-02-13T15:27:16.792270464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.793013 containerd[1504]: time="2025-02-13T15:27:16.792794016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:16.793013 containerd[1504]: time="2025-02-13T15:27:16.792842256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:16.793013 containerd[1504]: time="2025-02-13T15:27:16.792858787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.794434 containerd[1504]: time="2025-02-13T15:27:16.794310159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:16.819612 systemd[1]: Started cri-containerd-33a60328c6d7dbf28a5ea4fa3865dc96df7c66e338be363d2d1c55790ded7492.scope - libcontainer container 33a60328c6d7dbf28a5ea4fa3865dc96df7c66e338be363d2d1c55790ded7492. Feb 13 15:27:16.825831 systemd[1]: Started cri-containerd-6c1851e82bee16e3dbbf474182651b0816c5e91758e5e0b8b050cd3e129acdf5.scope - libcontainer container 6c1851e82bee16e3dbbf474182651b0816c5e91758e5e0b8b050cd3e129acdf5. Feb 13 15:27:16.829037 systemd[1]: Started cri-containerd-adbad8a8e4ea6a084905a818dc25012564479c8745e6abfbdf2b8f2a9a123dee.scope - libcontainer container adbad8a8e4ea6a084905a818dc25012564479c8745e6abfbdf2b8f2a9a123dee. 
Feb 13 15:27:16.866193 containerd[1504]: time="2025-02-13T15:27:16.865904815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dca0ccf6b994af5f0d8b06daf9445796,Namespace:kube-system,Attempt:0,} returns sandbox id \"33a60328c6d7dbf28a5ea4fa3865dc96df7c66e338be363d2d1c55790ded7492\"" Feb 13 15:27:16.867598 kubelet[2309]: E0213 15:27:16.867118 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:16.871280 containerd[1504]: time="2025-02-13T15:27:16.871245811Z" level=info msg="CreateContainer within sandbox \"33a60328c6d7dbf28a5ea4fa3865dc96df7c66e338be363d2d1c55790ded7492\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:27:16.876837 containerd[1504]: time="2025-02-13T15:27:16.876778196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1851e82bee16e3dbbf474182651b0816c5e91758e5e0b8b050cd3e129acdf5\"" Feb 13 15:27:16.877656 kubelet[2309]: E0213 15:27:16.877614 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:16.880288 containerd[1504]: time="2025-02-13T15:27:16.880248935Z" level=info msg="CreateContainer within sandbox \"6c1851e82bee16e3dbbf474182651b0816c5e91758e5e0b8b050cd3e129acdf5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:27:16.881878 containerd[1504]: time="2025-02-13T15:27:16.881840350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"adbad8a8e4ea6a084905a818dc25012564479c8745e6abfbdf2b8f2a9a123dee\"" Feb 13 15:27:16.882377 kubelet[2309]: E0213 15:27:16.882329 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:16.884664 containerd[1504]: time="2025-02-13T15:27:16.884622858Z" level=info msg="CreateContainer within sandbox \"adbad8a8e4ea6a084905a818dc25012564479c8745e6abfbdf2b8f2a9a123dee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:27:16.900953 containerd[1504]: time="2025-02-13T15:27:16.900837045Z" level=info msg="CreateContainer within sandbox \"33a60328c6d7dbf28a5ea4fa3865dc96df7c66e338be363d2d1c55790ded7492\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82d5a9cffaaeb6a390c541f7c09407a77c2afdd2a799a97e1cd0f330f55473e3\"" Feb 13 15:27:16.901685 containerd[1504]: time="2025-02-13T15:27:16.901630844Z" level=info msg="StartContainer for \"82d5a9cffaaeb6a390c541f7c09407a77c2afdd2a799a97e1cd0f330f55473e3\"" Feb 13 15:27:16.904372 containerd[1504]: time="2025-02-13T15:27:16.904325758Z" level=info msg="CreateContainer within sandbox \"6c1851e82bee16e3dbbf474182651b0816c5e91758e5e0b8b050cd3e129acdf5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"451e155dfbd97bfa8963a1f349f5a01552910bbfa4a15752a04da005fbe91417\"" Feb 13 15:27:16.904674 containerd[1504]: time="2025-02-13T15:27:16.904649455Z" level=info msg="StartContainer for \"451e155dfbd97bfa8963a1f349f5a01552910bbfa4a15752a04da005fbe91417\"" Feb 13 15:27:16.915984 
containerd[1504]: time="2025-02-13T15:27:16.915919871Z" level=info msg="CreateContainer within sandbox \"adbad8a8e4ea6a084905a818dc25012564479c8745e6abfbdf2b8f2a9a123dee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd92e1a5eb78d3f10d927352f3668d316338e33ed7822849f0646cc1070abd1b\"" Feb 13 15:27:16.917698 containerd[1504]: time="2025-02-13T15:27:16.917658552Z" level=info msg="StartContainer for \"fd92e1a5eb78d3f10d927352f3668d316338e33ed7822849f0646cc1070abd1b\"" Feb 13 15:27:16.933583 systemd[1]: Started cri-containerd-82d5a9cffaaeb6a390c541f7c09407a77c2afdd2a799a97e1cd0f330f55473e3.scope - libcontainer container 82d5a9cffaaeb6a390c541f7c09407a77c2afdd2a799a97e1cd0f330f55473e3. Feb 13 15:27:16.938154 systemd[1]: Started cri-containerd-451e155dfbd97bfa8963a1f349f5a01552910bbfa4a15752a04da005fbe91417.scope - libcontainer container 451e155dfbd97bfa8963a1f349f5a01552910bbfa4a15752a04da005fbe91417. Feb 13 15:27:16.957556 systemd[1]: Started cri-containerd-fd92e1a5eb78d3f10d927352f3668d316338e33ed7822849f0646cc1070abd1b.scope - libcontainer container fd92e1a5eb78d3f10d927352f3668d316338e33ed7822849f0646cc1070abd1b. Feb 13 15:27:16.968272 kubelet[2309]: E0213 15:27:16.968234 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="3.2s" Feb 13 15:27:16.995924 containerd[1504]: time="2025-02-13T15:27:16.995870948Z" level=info msg="StartContainer for \"451e155dfbd97bfa8963a1f349f5a01552910bbfa4a15752a04da005fbe91417\" returns successfully" Feb 13 15:27:16.996159 containerd[1504]: time="2025-02-13T15:27:16.995887829Z" level=info msg="StartContainer for \"82d5a9cffaaeb6a390c541f7c09407a77c2afdd2a799a97e1cd0f330f55473e3\" returns successfully" Feb 13 15:27:17.009956 containerd[1504]: time="2025-02-13T15:27:17.009905498Z" level=info msg="StartContainer for \"fd92e1a5eb78d3f10d927352f3668d316338e33ed7822849f0646cc1070abd1b\" returns successfully" Feb 13 15:27:17.026696 kubelet[2309]: E0213 15:27:17.026659 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:17.028037 kubelet[2309]: E0213 15:27:17.028013 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:17.029992 kubelet[2309]: E0213 15:27:17.029970 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:17.077339 kubelet[2309]: I0213 15:27:17.077302 2309 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:17.950494 kubelet[2309]: I0213 15:27:17.950401 2309 apiserver.go:52] "Watching apiserver" Feb 13 15:27:17.961481 kubelet[2309]: I0213 15:27:17.961413 2309 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:27:17.979642 kubelet[2309]: I0213 15:27:17.979601 2309 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:27:18.033504 kubelet[2309]: E0213 15:27:18.033445 2309 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:18.033668 kubelet[2309]: E0213 15:27:18.033445 2309 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:18.033896 kubelet[2309]: E0213 15:27:18.033880 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:18.033921 kubelet[2309]: E0213 15:27:18.033904 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:19.106116 kubelet[2309]: E0213 15:27:19.106055 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:19.106667 kubelet[2309]: E0213 15:27:19.106329 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.032627 kubelet[2309]: E0213 15:27:20.032559 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.032779 kubelet[2309]: E0213 15:27:20.032655 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:22.544869 systemd[1]: Reloading requested from client PID 2591 ('systemctl') (unit session-7.scope)... Feb 13 15:27:22.544891 systemd[1]: Reloading... Feb 13 15:27:22.634467 zram_generator::config[2636]: No configuration found. Feb 13 15:27:22.750620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:22.852792 systemd[1]: Reloading finished in 307 ms. Feb 13 15:27:22.907279 kubelet[2309]: I0213 15:27:22.907213 2309 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:22.907519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:22.920138 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:27:22.920448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:22.920517 systemd[1]: kubelet.service: Consumed 1.440s CPU time, 114.9M memory peak, 0B memory swap peak. Feb 13 15:27:22.927652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:23.097452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:23.104495 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:27:23.159524 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:27:23.159524 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:27:23.159524 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:27:23.159964 kubelet[2675]: I0213 15:27:23.159568 2675 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:27:23.164652 kubelet[2675]: I0213 15:27:23.164600 2675 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:27:23.164652 kubelet[2675]: I0213 15:27:23.164634 2675 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:27:23.164951 kubelet[2675]: I0213 15:27:23.164925 2675 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:27:23.166435 kubelet[2675]: I0213 15:27:23.166409 2675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:27:23.168448 kubelet[2675]: I0213 15:27:23.168392 2675 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:23.176363 kubelet[2675]: I0213 15:27:23.176312 2675 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:27:23.176618 kubelet[2675]: I0213 15:27:23.176592 2675 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:27:23.176767 kubelet[2675]: I0213 15:27:23.176742 2675 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:27:23.176846 kubelet[2675]: I0213 15:27:23.176771 2675 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:27:23.176846 kubelet[2675]: I0213 15:27:23.176787 2675 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:27:23.176846 
kubelet[2675]: I0213 15:27:23.176817 2675 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:23.176922 kubelet[2675]: I0213 15:27:23.176910 2675 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:27:23.176947 kubelet[2675]: I0213 15:27:23.176925 2675 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:27:23.176971 kubelet[2675]: I0213 15:27:23.176958 2675 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:27:23.176999 kubelet[2675]: I0213 15:27:23.176976 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:23.178714 kubelet[2675]: I0213 15:27:23.178660 2675 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:27:23.180320 kubelet[2675]: I0213 15:27:23.179012 2675 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:27:23.180320 kubelet[2675]: I0213 15:27:23.179694 2675 server.go:1256] "Started kubelet" Feb 13 15:27:23.180509 kubelet[2675]: I0213 15:27:23.180473 2675 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:27:23.181504 kubelet[2675]: I0213 15:27:23.180686 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:27:23.181504 kubelet[2675]: I0213 15:27:23.181126 2675 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:27:23.182296 kubelet[2675]: I0213 15:27:23.182278 2675 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:27:23.184640 kubelet[2675]: I0213 15:27:23.184621 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:27:23.192941 kubelet[2675]: I0213 15:27:23.191107 2675 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:27:23.192941 kubelet[2675]: I0213 15:27:23.191450 2675 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:27:23.192941 kubelet[2675]: I0213 15:27:23.191696 2675 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:27:23.195634 kubelet[2675]: I0213 15:27:23.195591 2675 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:27:23.195829 kubelet[2675]: I0213 15:27:23.195707 2675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:27:23.197470 kubelet[2675]: I0213 15:27:23.197430 2675 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:27:23.201096 kubelet[2675]: I0213 15:27:23.201069 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:27:23.202467 kubelet[2675]: I0213 15:27:23.202444 2675 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:27:23.202557 kubelet[2675]: I0213 15:27:23.202544 2675 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:27:23.202622 kubelet[2675]: I0213 15:27:23.202612 2675 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:27:23.202741 kubelet[2675]: E0213 15:27:23.202727 2675 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:27:23.243228 kubelet[2675]: I0213 15:27:23.243185 2675 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:27:23.243228 kubelet[2675]: I0213 15:27:23.243211 2675 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:27:23.243228 kubelet[2675]: I0213 15:27:23.243231 2675 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:23.243537 kubelet[2675]: I0213 15:27:23.243434 2675 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:27:23.243537 kubelet[2675]: I0213 15:27:23.243469 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:27:23.243537 kubelet[2675]: I0213 15:27:23.243477 2675 policy_none.go:49] "None policy: Start" Feb 13 15:27:23.244219 kubelet[2675]: I0213 15:27:23.244198 2675 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:27:23.244320 kubelet[2675]: I0213 15:27:23.244289 2675 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:27:23.244630 kubelet[2675]: I0213 15:27:23.244610 2675 state_mem.go:75] "Updated machine memory state" Feb 13 15:27:23.249526 kubelet[2675]: I0213 15:27:23.249496 2675 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:27:23.249838 kubelet[2675]: I0213 15:27:23.249813 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:27:23.297602 kubelet[2675]: I0213 15:27:23.297564 2675 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:23.303860 kubelet[2675]: I0213 15:27:23.303807 2675 topology_manager.go:215] "Topology Admit Handler" podUID="dca0ccf6b994af5f0d8b06daf9445796" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:27:23.303997 kubelet[2675]: I0213 15:27:23.303896 2675 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:27:23.303997 kubelet[2675]: I0213 15:27:23.303926 2675 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:27:23.393101 kubelet[2675]: I0213 15:27:23.392954 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:23.393101 kubelet[2675]: I0213 15:27:23.393003 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.393101 kubelet[2675]: I0213 
15:27:23.393032 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.393101 kubelet[2675]: I0213 15:27:23.393051 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:23.393101 kubelet[2675]: I0213 15:27:23.393068 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dca0ccf6b994af5f0d8b06daf9445796-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dca0ccf6b994af5f0d8b06daf9445796\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:23.393314 kubelet[2675]: I0213 15:27:23.393086 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.393314 kubelet[2675]: I0213 15:27:23.393106 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.393314 kubelet[2675]: I0213 15:27:23.393127 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.393314 kubelet[2675]: I0213 15:27:23.393194 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:23.403916 kubelet[2675]: E0213 15:27:23.403836 2675 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:23.405344 kubelet[2675]: E0213 15:27:23.405203 2675 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:23.405967 kubelet[2675]: I0213 15:27:23.405738 2675 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:27:23.405967 kubelet[2675]: I0213 15:27:23.405829 2675 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:27:23.705412 kubelet[2675]: E0213 15:27:23.705239 2675 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:23.705412 kubelet[2675]: E0213 15:27:23.705274 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:23.705822 kubelet[2675]: E0213 15:27:23.705643 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:24.178558 kubelet[2675]: I0213 15:27:24.178396 2675 apiserver.go:52] "Watching apiserver" Feb 13 15:27:24.192008 kubelet[2675]: I0213 15:27:24.191975 2675 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:27:24.226380 kubelet[2675]: E0213 15:27:24.226313 2675 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:24.226650 kubelet[2675]: E0213 15:27:24.226626 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:24.227028 kubelet[2675]: E0213 15:27:24.227001 2675 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:24.227656 kubelet[2675]: E0213 15:27:24.227587 2675 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:24.228647 kubelet[2675]: E0213 15:27:24.228616 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:24.229058 kubelet[2675]: E0213 15:27:24.229030 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:24.298841 kubelet[2675]: I0213 15:27:24.298751 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.298699234 podStartE2EDuration="5.298699234s" podCreationTimestamp="2025-02-13 15:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:24.298630483 +0000 UTC m=+1.188945451" watchObservedRunningTime="2025-02-13 15:27:24.298699234 +0000 UTC m=+1.189014192" Feb 13 15:27:24.315419 kubelet[2675]: I0213 15:27:24.315373 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.31530797 podStartE2EDuration="1.31530797s" podCreationTimestamp="2025-02-13 15:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:24.309131005 +0000 UTC m=+1.199445963" watchObservedRunningTime="2025-02-13 15:27:24.31530797 +0000 UTC m=+1.205622928" Feb 13 15:27:24.327427 kubelet[2675]: I0213 15:27:24.327374 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.327311182 podStartE2EDuration="5.327311182s" podCreationTimestamp="2025-02-13 15:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:24.315969494 +0000 UTC m=+1.206284452" watchObservedRunningTime="2025-02-13 15:27:24.327311182 +0000 UTC m=+1.217626140" Feb 13 15:27:25.219687 kubelet[2675]: E0213 15:27:25.219326 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:25.219687 kubelet[2675]: E0213 15:27:25.219326 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:25.219687 kubelet[2675]: E0213 15:27:25.219637 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:27.381035 kubelet[2675]: E0213 15:27:27.381002 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:27.987595 sudo[1677]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:27.989525 sshd[1676]: Connection closed by 10.0.0.1 port 57368 Feb 13 15:27:27.990007 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:27.994463 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:57368.service: Deactivated successfully. Feb 13 15:27:27.997083 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:27:27.997309 systemd[1]: session-7.scope: Consumed 5.284s CPU time, 188.6M memory peak, 0B memory swap peak. Feb 13 15:27:27.997852 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:27:27.998775 systemd-logind[1483]: Removed session 7. Feb 13 15:27:28.223248 kubelet[2675]: E0213 15:27:28.223219 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:30.766010 kubelet[2675]: E0213 15:27:30.765948 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:31.226876 kubelet[2675]: E0213 15:27:31.226747 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:32.300462 update_engine[1486]: I20250213 15:27:32.300398 1486 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:27:32.333382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2771) Feb 13 15:27:32.373104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2769) Feb 13 15:27:32.396009 kubelet[2675]: E0213 15:27:32.395953 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:32.410611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2769) Feb 13 15:27:33.229065 kubelet[2675]: E0213 15:27:33.229030 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:34.154843 kubelet[2675]: I0213 15:27:34.154521 2675 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:27:34.156272 containerd[1504]: time="2025-02-13T15:27:34.156042185Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:27:34.156902 kubelet[2675]: I0213 15:27:34.156589 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:27:34.712332 kubelet[2675]: I0213 15:27:34.712175 2675 topology_manager.go:215] "Topology Admit Handler" podUID="813a14c5-7e9f-45bb-aa48-c957618044a9" podNamespace="kube-system" podName="kube-proxy-nr2rv" Feb 13 15:27:34.722994 systemd[1]: Created slice kubepods-besteffort-pod813a14c5_7e9f_45bb_aa48_c957618044a9.slice - libcontainer container kubepods-besteffort-pod813a14c5_7e9f_45bb_aa48_c957618044a9.slice. 
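The kuberuntime_manager.go:1529 / kubelet_network.go:61 pair above records kubelet handing the node's pod CIDR (192.168.0.0/24) to the container runtime over CRI, after which containerd notes it is still waiting for another component (calico-node, later in this log) to drop a CNI config. A sketch of that CRI call using the published k8s.io/cri-api types; the helper is hypothetical and gRPC connection setup is elided:

    package sketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // updatePodCIDR shows the CRI message behind "Updating runtime config
    // through cri with podcidr" (illustrative, not kubelet's code path).
    func updatePodCIDR(ctx context.Context, rt runtimeapi.RuntimeServiceClient, cidr string) error {
        _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{
                    PodCidr: cidr, // "192.168.0.0/24" in the log above
                },
            },
        })
        return err
    }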
Feb 13 15:27:34.767708 kubelet[2675]: I0213 15:27:34.767643 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/813a14c5-7e9f-45bb-aa48-c957618044a9-kube-proxy\") pod \"kube-proxy-nr2rv\" (UID: \"813a14c5-7e9f-45bb-aa48-c957618044a9\") " pod="kube-system/kube-proxy-nr2rv" Feb 13 15:27:34.767708 kubelet[2675]: I0213 15:27:34.767695 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmwz\" (UniqueName: \"kubernetes.io/projected/813a14c5-7e9f-45bb-aa48-c957618044a9-kube-api-access-rnmwz\") pod \"kube-proxy-nr2rv\" (UID: \"813a14c5-7e9f-45bb-aa48-c957618044a9\") " pod="kube-system/kube-proxy-nr2rv" Feb 13 15:27:34.767708 kubelet[2675]: I0213 15:27:34.767718 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/813a14c5-7e9f-45bb-aa48-c957618044a9-xtables-lock\") pod \"kube-proxy-nr2rv\" (UID: \"813a14c5-7e9f-45bb-aa48-c957618044a9\") " pod="kube-system/kube-proxy-nr2rv" Feb 13 15:27:34.767958 kubelet[2675]: I0213 15:27:34.767737 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/813a14c5-7e9f-45bb-aa48-c957618044a9-lib-modules\") pod \"kube-proxy-nr2rv\" (UID: \"813a14c5-7e9f-45bb-aa48-c957618044a9\") " pod="kube-system/kube-proxy-nr2rv" Feb 13 15:27:34.820149 kubelet[2675]: I0213 15:27:34.820100 2675 topology_manager.go:215] "Topology Admit Handler" podUID="9993aa48-8ca9-4225-8e99-b8a4da9a4dff" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-zrxdj" Feb 13 15:27:34.828853 systemd[1]: Created slice kubepods-besteffort-pod9993aa48_8ca9_4225_8e99_b8a4da9a4dff.slice - libcontainer container kubepods-besteffort-pod9993aa48_8ca9_4225_8e99_b8a4da9a4dff.slice. Feb 13 15:27:34.868518 kubelet[2675]: I0213 15:27:34.868433 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9993aa48-8ca9-4225-8e99-b8a4da9a4dff-var-lib-calico\") pod \"tigera-operator-c7ccbd65-zrxdj\" (UID: \"9993aa48-8ca9-4225-8e99-b8a4da9a4dff\") " pod="tigera-operator/tigera-operator-c7ccbd65-zrxdj" Feb 13 15:27:34.868518 kubelet[2675]: I0213 15:27:34.868531 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvx45\" (UniqueName: \"kubernetes.io/projected/9993aa48-8ca9-4225-8e99-b8a4da9a4dff-kube-api-access-cvx45\") pod \"tigera-operator-c7ccbd65-zrxdj\" (UID: \"9993aa48-8ca9-4225-8e99-b8a4da9a4dff\") " pod="tigera-operator/tigera-operator-c7ccbd65-zrxdj" Feb 13 15:27:35.030950 kubelet[2675]: E0213 15:27:35.030791 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:35.031565 containerd[1504]: time="2025-02-13T15:27:35.031500123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nr2rv,Uid:813a14c5-7e9f-45bb-aa48-c957618044a9,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:35.058372 containerd[1504]: time="2025-02-13T15:27:35.058134830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:35.058372 containerd[1504]: time="2025-02-13T15:27:35.058202278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:35.058372 containerd[1504]: time="2025-02-13T15:27:35.058221133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:35.058372 containerd[1504]: time="2025-02-13T15:27:35.058316474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:35.082563 systemd[1]: Started cri-containerd-7679abd83352bfc9eb7e456c7c80fce3e67f8a09670cd64ebe598a4413bab328.scope - libcontainer container 7679abd83352bfc9eb7e456c7c80fce3e67f8a09670cd64ebe598a4413bab328. Feb 13 15:27:35.105598 containerd[1504]: time="2025-02-13T15:27:35.105468134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nr2rv,Uid:813a14c5-7e9f-45bb-aa48-c957618044a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7679abd83352bfc9eb7e456c7c80fce3e67f8a09670cd64ebe598a4413bab328\"" Feb 13 15:27:35.106730 kubelet[2675]: E0213 15:27:35.106681 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:35.109290 containerd[1504]: time="2025-02-13T15:27:35.109251974Z" level=info msg="CreateContainer within sandbox \"7679abd83352bfc9eb7e456c7c80fce3e67f8a09670cd64ebe598a4413bab328\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:27:35.127068 containerd[1504]: time="2025-02-13T15:27:35.127007251Z" level=info msg="CreateContainer within sandbox \"7679abd83352bfc9eb7e456c7c80fce3e67f8a09670cd64ebe598a4413bab328\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81e4c6011eaff164277f732993c16911b25df8191ad2e613c1868ab5b8c12a6f\"" Feb 13 15:27:35.127700 containerd[1504]: time="2025-02-13T15:27:35.127576528Z" level=info msg="StartContainer for \"81e4c6011eaff164277f732993c16911b25df8191ad2e613c1868ab5b8c12a6f\"" Feb 13 15:27:35.132133 containerd[1504]: time="2025-02-13T15:27:35.132102552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-zrxdj,Uid:9993aa48-8ca9-4225-8e99-b8a4da9a4dff,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:27:35.156583 systemd[1]: Started cri-containerd-81e4c6011eaff164277f732993c16911b25df8191ad2e613c1868ab5b8c12a6f.scope - libcontainer container 81e4c6011eaff164277f732993c16911b25df8191ad2e613c1868ab5b8c12a6f. Feb 13 15:27:35.159529 containerd[1504]: time="2025-02-13T15:27:35.159210936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:35.159529 containerd[1504]: time="2025-02-13T15:27:35.159307319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:35.159529 containerd[1504]: time="2025-02-13T15:27:35.159327507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:35.161095 containerd[1504]: time="2025-02-13T15:27:35.160723328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:35.182676 systemd[1]: Started cri-containerd-bc7306bb919172808a4cfe87679663cbc98ce342f314b111979553cbdf93d5be.scope - libcontainer container bc7306bb919172808a4cfe87679663cbc98ce342f314b111979553cbdf93d5be. Feb 13 15:27:35.204735 containerd[1504]: time="2025-02-13T15:27:35.204645117Z" level=info msg="StartContainer for \"81e4c6011eaff164277f732993c16911b25df8191ad2e613c1868ab5b8c12a6f\" returns successfully" Feb 13 15:27:35.226755 containerd[1504]: time="2025-02-13T15:27:35.226649363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-zrxdj,Uid:9993aa48-8ca9-4225-8e99-b8a4da9a4dff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc7306bb919172808a4cfe87679663cbc98ce342f314b111979553cbdf93d5be\"" Feb 13 15:27:35.228260 containerd[1504]: time="2025-02-13T15:27:35.228234152Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:27:35.235255 kubelet[2675]: E0213 15:27:35.235122 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:35.244536 kubelet[2675]: I0213 15:27:35.244424 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nr2rv" podStartSLOduration=1.244384752 podStartE2EDuration="1.244384752s" podCreationTimestamp="2025-02-13 15:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:35.243804824 +0000 UTC m=+12.134119792" watchObservedRunningTime="2025-02-13 15:27:35.244384752 +0000 UTC m=+12.134699710" Feb 13 15:27:37.028370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326326966.mount: Deactivated successfully. 
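The kube-proxy startup above traces the standard CRI sequence as containerd reports it: RunPodSandbox returns a sandbox id, CreateContainer places the kube-proxy container inside that sandbox, and StartContainer runs it. A condensed client-side sketch of the same three calls against k8s.io/cri-api (configs trimmed to the fields the sequence needs; illustrative, not kubelet's implementation):

    package sketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startPod condenses the RunPodSandbox -> CreateContainer -> StartContainer
    // sequence visible in the containerd log lines above.
    func startPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandbox *runtimeapi.PodSandboxConfig, ctr *runtimeapi.ContainerConfig) (string, error) {

        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandbox})
        if err != nil {
            return "", err
        }
        // sb.PodSandboxId is the id containerd echoes back, e.g. "7679abd8...".
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        ctr,
            SandboxConfig: sandbox,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
        return created.ContainerId, err
    }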
Feb 13 15:27:37.339628 containerd[1504]: time="2025-02-13T15:27:37.339454893Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:37.340622 containerd[1504]: time="2025-02-13T15:27:37.340582585Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 15:27:37.341827 containerd[1504]: time="2025-02-13T15:27:37.341790959Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:37.344142 containerd[1504]: time="2025-02-13T15:27:37.344081508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:37.344806 containerd[1504]: time="2025-02-13T15:27:37.344697122Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.1161879s" Feb 13 15:27:37.344806 containerd[1504]: time="2025-02-13T15:27:37.344787312Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 15:27:37.346389 containerd[1504]: time="2025-02-13T15:27:37.346341299Z" level=info msg="CreateContainer within sandbox \"bc7306bb919172808a4cfe87679663cbc98ce342f314b111979553cbdf93d5be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:27:37.357731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776560441.mount: Deactivated successfully. Feb 13 15:27:37.359323 containerd[1504]: time="2025-02-13T15:27:37.359298420Z" level=info msg="CreateContainer within sandbox \"bc7306bb919172808a4cfe87679663cbc98ce342f314b111979553cbdf93d5be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfc6b310862dbbb9fa25d1785bb5c4637ca5871fb6d62f543c1f9f14b0b95427\"" Feb 13 15:27:37.359897 containerd[1504]: time="2025-02-13T15:27:37.359834984Z" level=info msg="StartContainer for \"dfc6b310862dbbb9fa25d1785bb5c4637ca5871fb6d62f543c1f9f14b0b95427\"" Feb 13 15:27:37.394641 systemd[1]: Started cri-containerd-dfc6b310862dbbb9fa25d1785bb5c4637ca5871fb6d62f543c1f9f14b0b95427.scope - libcontainer container dfc6b310862dbbb9fa25d1785bb5c4637ca5871fb6d62f543c1f9f14b0b95427. 
Feb 13 15:27:37.422584 containerd[1504]: time="2025-02-13T15:27:37.422537442Z" level=info msg="StartContainer for \"dfc6b310862dbbb9fa25d1785bb5c4637ca5871fb6d62f543c1f9f14b0b95427\" returns successfully" Feb 13 15:27:40.359192 kubelet[2675]: I0213 15:27:40.359134 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-zrxdj" podStartSLOduration=4.241761303 podStartE2EDuration="6.359080892s" podCreationTimestamp="2025-02-13 15:27:34 +0000 UTC" firstStartedPulling="2025-02-13 15:27:35.227749385 +0000 UTC m=+12.118064343" lastFinishedPulling="2025-02-13 15:27:37.345068984 +0000 UTC m=+14.235383932" observedRunningTime="2025-02-13 15:27:38.249818734 +0000 UTC m=+15.140133692" watchObservedRunningTime="2025-02-13 15:27:40.359080892 +0000 UTC m=+17.249395871" Feb 13 15:27:40.359897 kubelet[2675]: I0213 15:27:40.359397 2675 topology_manager.go:215] "Topology Admit Handler" podUID="c5aad7ad-4648-4bd9-b096-4f3461474b9b" podNamespace="calico-system" podName="calico-typha-7859dd9854-86p5w" Feb 13 15:27:40.378923 systemd[1]: Created slice kubepods-besteffort-podc5aad7ad_4648_4bd9_b096_4f3461474b9b.slice - libcontainer container kubepods-besteffort-podc5aad7ad_4648_4bd9_b096_4f3461474b9b.slice. Feb 13 15:27:40.403319 kubelet[2675]: I0213 15:27:40.403275 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7c6\" (UniqueName: \"kubernetes.io/projected/c5aad7ad-4648-4bd9-b096-4f3461474b9b-kube-api-access-zh7c6\") pod \"calico-typha-7859dd9854-86p5w\" (UID: \"c5aad7ad-4648-4bd9-b096-4f3461474b9b\") " pod="calico-system/calico-typha-7859dd9854-86p5w" Feb 13 15:27:40.404162 kubelet[2675]: I0213 15:27:40.403842 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c5aad7ad-4648-4bd9-b096-4f3461474b9b-typha-certs\") pod \"calico-typha-7859dd9854-86p5w\" (UID: \"c5aad7ad-4648-4bd9-b096-4f3461474b9b\") " pod="calico-system/calico-typha-7859dd9854-86p5w" Feb 13 15:27:40.404162 kubelet[2675]: I0213 15:27:40.404098 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5aad7ad-4648-4bd9-b096-4f3461474b9b-tigera-ca-bundle\") pod \"calico-typha-7859dd9854-86p5w\" (UID: \"c5aad7ad-4648-4bd9-b096-4f3461474b9b\") " pod="calico-system/calico-typha-7859dd9854-86p5w" Feb 13 15:27:40.434736 kubelet[2675]: I0213 15:27:40.433946 2675 topology_manager.go:215] "Topology Admit Handler" podUID="362c412b-8fe7-4a9a-89bb-209af0119fc6" podNamespace="calico-system" podName="calico-node-26nw9" Feb 13 15:27:40.435790 kubelet[2675]: W0213 15:27:40.435772 2675 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Feb 13 15:27:40.436092 kubelet[2675]: E0213 15:27:40.436077 2675 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Feb 13 15:27:40.443048 systemd[1]: Created slice 
kubepods-besteffort-pod362c412b_8fe7_4a9a_89bb_209af0119fc6.slice - libcontainer container kubepods-besteffort-pod362c412b_8fe7_4a9a_89bb_209af0119fc6.slice. Feb 13 15:27:40.504591 kubelet[2675]: I0213 15:27:40.504545 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/362c412b-8fe7-4a9a-89bb-209af0119fc6-node-certs\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504591 kubelet[2675]: I0213 15:27:40.504581 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-cni-net-dir\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504591 kubelet[2675]: I0213 15:27:40.504602 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-flexvol-driver-host\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504787 kubelet[2675]: I0213 15:27:40.504625 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-var-lib-calico\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504787 kubelet[2675]: I0213 15:27:40.504645 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-cni-bin-dir\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504853 kubelet[2675]: I0213 15:27:40.504807 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-cni-log-dir\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504877 kubelet[2675]: I0213 15:27:40.504866 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-var-run-calico\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504908 kubelet[2675]: I0213 15:27:40.504889 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79w97\" (UniqueName: \"kubernetes.io/projected/362c412b-8fe7-4a9a-89bb-209af0119fc6-kube-api-access-79w97\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.504936 kubelet[2675]: I0213 15:27:40.504927 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362c412b-8fe7-4a9a-89bb-209af0119fc6-tigera-ca-bundle\") pod \"calico-node-26nw9\" 
(UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.505013 kubelet[2675]: I0213 15:27:40.504963 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-xtables-lock\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.505013 kubelet[2675]: I0213 15:27:40.505006 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-lib-modules\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.505076 kubelet[2675]: I0213 15:27:40.505025 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/362c412b-8fe7-4a9a-89bb-209af0119fc6-policysync\") pod \"calico-node-26nw9\" (UID: \"362c412b-8fe7-4a9a-89bb-209af0119fc6\") " pod="calico-system/calico-node-26nw9" Feb 13 15:27:40.539289 kubelet[2675]: I0213 15:27:40.539239 2675 topology_manager.go:215] "Topology Admit Handler" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" podNamespace="calico-system" podName="csi-node-driver-9pw2x" Feb 13 15:27:40.539634 kubelet[2675]: E0213 15:27:40.539611 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:40.606067 kubelet[2675]: I0213 15:27:40.606018 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/782fda46-0c98-43d2-919a-69ce574b5e7e-kubelet-dir\") pod \"csi-node-driver-9pw2x\" (UID: \"782fda46-0c98-43d2-919a-69ce574b5e7e\") " pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:40.606232 kubelet[2675]: I0213 15:27:40.606128 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fht8\" (UniqueName: \"kubernetes.io/projected/782fda46-0c98-43d2-919a-69ce574b5e7e-kube-api-access-2fht8\") pod \"csi-node-driver-9pw2x\" (UID: \"782fda46-0c98-43d2-919a-69ce574b5e7e\") " pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:40.606506 kubelet[2675]: I0213 15:27:40.606447 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/782fda46-0c98-43d2-919a-69ce574b5e7e-socket-dir\") pod \"csi-node-driver-9pw2x\" (UID: \"782fda46-0c98-43d2-919a-69ce574b5e7e\") " pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:40.606933 kubelet[2675]: I0213 15:27:40.606554 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/782fda46-0c98-43d2-919a-69ce574b5e7e-registration-dir\") pod \"csi-node-driver-9pw2x\" (UID: \"782fda46-0c98-43d2-919a-69ce574b5e7e\") " pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:40.606933 kubelet[2675]: I0213 15:27:40.606641 2675 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/782fda46-0c98-43d2-919a-69ce574b5e7e-varrun\") pod \"csi-node-driver-9pw2x\" (UID: \"782fda46-0c98-43d2-919a-69ce574b5e7e\") " pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:40.609745 kubelet[2675]: E0213 15:27:40.609625 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.609745 kubelet[2675]: W0213 15:27:40.609663 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.609745 kubelet[2675]: E0213 15:27:40.609700 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.688535 kubelet[2675]: E0213 15:27:40.688493 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:40.689375 containerd[1504]: time="2025-02-13T15:27:40.689245905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7859dd9854-86p5w,Uid:c5aad7ad-4648-4bd9-b096-4f3461474b9b,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:40.707167 kubelet[2675]: E0213 15:27:40.707134 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.707167 kubelet[2675]: W0213 15:27:40.707154 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.707264 kubelet[2675]: E0213 15:27:40.707180 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.707590 kubelet[2675]: E0213 15:27:40.707567 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.707590 kubelet[2675]: W0213 15:27:40.707584 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.707590 kubelet[2675]: E0213 15:27:40.707606 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.707971 kubelet[2675]: E0213 15:27:40.707886 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.707971 kubelet[2675]: W0213 15:27:40.707902 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.707971 kubelet[2675]: E0213 15:27:40.707924 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.708314 kubelet[2675]: E0213 15:27:40.708272 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.708314 kubelet[2675]: W0213 15:27:40.708307 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.708416 kubelet[2675]: E0213 15:27:40.708371 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.708800 kubelet[2675]: E0213 15:27:40.708784 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.708850 kubelet[2675]: W0213 15:27:40.708799 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.708850 kubelet[2675]: E0213 15:27:40.708817 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.709287 kubelet[2675]: E0213 15:27:40.709269 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.709287 kubelet[2675]: W0213 15:27:40.709284 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.709398 kubelet[2675]: E0213 15:27:40.709307 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.709637 kubelet[2675]: E0213 15:27:40.709603 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.709637 kubelet[2675]: W0213 15:27:40.709615 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.709718 kubelet[2675]: E0213 15:27:40.709647 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.709852 kubelet[2675]: E0213 15:27:40.709838 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.709852 kubelet[2675]: W0213 15:27:40.709848 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.709931 kubelet[2675]: E0213 15:27:40.709877 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.710069 kubelet[2675]: E0213 15:27:40.710055 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.710069 kubelet[2675]: W0213 15:27:40.710063 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.710141 kubelet[2675]: E0213 15:27:40.710076 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.710311 kubelet[2675]: E0213 15:27:40.710293 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.710311 kubelet[2675]: W0213 15:27:40.710306 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.710418 kubelet[2675]: E0213 15:27:40.710327 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.710647 kubelet[2675]: E0213 15:27:40.710626 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.710647 kubelet[2675]: W0213 15:27:40.710644 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.710722 kubelet[2675]: E0213 15:27:40.710668 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.710967 kubelet[2675]: E0213 15:27:40.710950 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.710967 kubelet[2675]: W0213 15:27:40.710962 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.711050 kubelet[2675]: E0213 15:27:40.710982 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.711215 kubelet[2675]: E0213 15:27:40.711196 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.711215 kubelet[2675]: W0213 15:27:40.711213 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.711285 kubelet[2675]: E0213 15:27:40.711256 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.711497 kubelet[2675]: E0213 15:27:40.711477 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.711497 kubelet[2675]: W0213 15:27:40.711490 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.711600 kubelet[2675]: E0213 15:27:40.711530 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.711775 kubelet[2675]: E0213 15:27:40.711758 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.711775 kubelet[2675]: W0213 15:27:40.711771 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.711848 kubelet[2675]: E0213 15:27:40.711801 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.712010 kubelet[2675]: E0213 15:27:40.711992 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.712010 kubelet[2675]: W0213 15:27:40.712005 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.712083 kubelet[2675]: E0213 15:27:40.712055 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.712288 kubelet[2675]: E0213 15:27:40.712268 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.712288 kubelet[2675]: W0213 15:27:40.712281 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.712371 kubelet[2675]: E0213 15:27:40.712300 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.712604 kubelet[2675]: E0213 15:27:40.712586 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.712604 kubelet[2675]: W0213 15:27:40.712599 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.712692 kubelet[2675]: E0213 15:27:40.712618 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.712862 kubelet[2675]: E0213 15:27:40.712844 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.712862 kubelet[2675]: W0213 15:27:40.712857 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.712931 kubelet[2675]: E0213 15:27:40.712878 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.713144 kubelet[2675]: E0213 15:27:40.713124 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.713144 kubelet[2675]: W0213 15:27:40.713138 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.713232 kubelet[2675]: E0213 15:27:40.713159 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.713463 kubelet[2675]: E0213 15:27:40.713446 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.713463 kubelet[2675]: W0213 15:27:40.713461 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.713537 kubelet[2675]: E0213 15:27:40.713505 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.713721 kubelet[2675]: E0213 15:27:40.713699 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.713721 kubelet[2675]: W0213 15:27:40.713714 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.713782 kubelet[2675]: E0213 15:27:40.713752 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.713958 kubelet[2675]: E0213 15:27:40.713943 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.713958 kubelet[2675]: W0213 15:27:40.713955 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.714011 kubelet[2675]: E0213 15:27:40.713987 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.714191 kubelet[2675]: E0213 15:27:40.714178 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.714216 kubelet[2675]: W0213 15:27:40.714189 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.714216 kubelet[2675]: E0213 15:27:40.714208 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.714474 kubelet[2675]: E0213 15:27:40.714455 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.714474 kubelet[2675]: W0213 15:27:40.714470 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.714532 kubelet[2675]: E0213 15:27:40.714490 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.714798 kubelet[2675]: E0213 15:27:40.714771 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.714798 kubelet[2675]: W0213 15:27:40.714786 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.714798 kubelet[2675]: E0213 15:27:40.714799 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.715096 kubelet[2675]: E0213 15:27:40.715080 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.715096 kubelet[2675]: W0213 15:27:40.715093 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.715224 kubelet[2675]: E0213 15:27:40.715107 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.776331 kubelet[2675]: E0213 15:27:40.775495 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.776331 kubelet[2675]: W0213 15:27:40.775519 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.776995 kubelet[2675]: E0213 15:27:40.776527 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:40.780296 kubelet[2675]: E0213 15:27:40.780244 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.780296 kubelet[2675]: W0213 15:27:40.780264 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.780296 kubelet[2675]: E0213 15:27:40.780287 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.801863 containerd[1504]: time="2025-02-13T15:27:40.801334869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:40.801863 containerd[1504]: time="2025-02-13T15:27:40.801432744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:40.801863 containerd[1504]: time="2025-02-13T15:27:40.801445227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:40.801863 containerd[1504]: time="2025-02-13T15:27:40.801536530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:40.814735 kubelet[2675]: E0213 15:27:40.814163 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.814735 kubelet[2675]: W0213 15:27:40.814188 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.814735 kubelet[2675]: E0213 15:27:40.814215 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:40.833535 systemd[1]: Started cri-containerd-0f19cab355b2073274a0e3caafc2d52261e101dc642c73b671f3080d21bf1319.scope - libcontainer container 0f19cab355b2073274a0e3caafc2d52261e101dc642c73b671f3080d21bf1319. 
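The driver-call.go storm running through these lines is kubelet probing the FlexVolume plugin directory nodeagent~uds that Calico's flexvol-driver-host mount points at: until the calico-node pod installs the uds binary there, running "uds init" finds no executable and captures empty output, and unmarshalling an empty string is exactly the "unexpected end of JSON input" reported above; the noise subsides once calico-node drops the driver in place. A small sketch of the failure mode, assuming the documented FlexVolume convention that init must print a JSON status object (struct fields follow that convention; the rest is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus models the JSON a FlexVolume driver prints on "init",
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string `json:"status"`
        Capabilities *struct {
            Attach bool `json:"attach"`
        } `json:"capabilities,omitempty"`
    }

    func main() {
        // Kubelet's side of the log above: the driver binary is missing, the
        // captured output is "", and unmarshalling "" fails.
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }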
Feb 13 15:27:40.882306 containerd[1504]: time="2025-02-13T15:27:40.881893007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7859dd9854-86p5w,Uid:c5aad7ad-4648-4bd9-b096-4f3461474b9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f19cab355b2073274a0e3caafc2d52261e101dc642c73b671f3080d21bf1319\"" Feb 13 15:27:40.884174 containerd[1504]: time="2025-02-13T15:27:40.883705037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:27:40.884225 kubelet[2675]: E0213 15:27:40.882647 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:40.916359 kubelet[2675]: E0213 15:27:40.916313 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:40.916359 kubelet[2675]: W0213 15:27:40.916365 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:40.916359 kubelet[2675]: E0213 15:27:40.916393 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.017682 kubelet[2675]: E0213 15:27:41.017648 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.017682 kubelet[2675]: W0213 15:27:41.017671 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.017851 kubelet[2675]: E0213 15:27:41.017693 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.118928 kubelet[2675]: E0213 15:27:41.118875 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.118928 kubelet[2675]: W0213 15:27:41.118897 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.118928 kubelet[2675]: E0213 15:27:41.118920 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.219661 kubelet[2675]: E0213 15:27:41.219628 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.219661 kubelet[2675]: W0213 15:27:41.219648 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.219661 kubelet[2675]: E0213 15:27:41.219668 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:41.320900 kubelet[2675]: E0213 15:27:41.320862 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.320900 kubelet[2675]: W0213 15:27:41.320884 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.320900 kubelet[2675]: E0213 15:27:41.320907 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.421967 kubelet[2675]: E0213 15:27:41.421925 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.421967 kubelet[2675]: W0213 15:27:41.421949 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.421967 kubelet[2675]: E0213 15:27:41.421974 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.523080 kubelet[2675]: E0213 15:27:41.522977 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.523080 kubelet[2675]: W0213 15:27:41.522996 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.523080 kubelet[2675]: E0213 15:27:41.523019 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.541741 kubelet[2675]: E0213 15:27:41.541692 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:41.541741 kubelet[2675]: W0213 15:27:41.541724 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:41.541937 kubelet[2675]: E0213 15:27:41.541753 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:41.646173 kubelet[2675]: E0213 15:27:41.646127 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:41.647161 containerd[1504]: time="2025-02-13T15:27:41.646753119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26nw9,Uid:362c412b-8fe7-4a9a-89bb-209af0119fc6,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:41.678523 containerd[1504]: time="2025-02-13T15:27:41.678236822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:41.678523 containerd[1504]: time="2025-02-13T15:27:41.678311874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:41.678523 containerd[1504]: time="2025-02-13T15:27:41.678327293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:41.678523 containerd[1504]: time="2025-02-13T15:27:41.678463359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:41.705091 systemd[1]: Started cri-containerd-07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128.scope - libcontainer container 07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128. Feb 13 15:27:41.730337 containerd[1504]: time="2025-02-13T15:27:41.730249941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26nw9,Uid:362c412b-8fe7-4a9a-89bb-209af0119fc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\"" Feb 13 15:27:41.731207 kubelet[2675]: E0213 15:27:41.731183 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.203232 kubelet[2675]: E0213 15:27:42.203165 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:42.936595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817286878.mount: Deactivated successfully. 
Feb 13 15:27:43.201574 containerd[1504]: time="2025-02-13T15:27:43.201419266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:43.202184 containerd[1504]: time="2025-02-13T15:27:43.202138733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 15:27:43.203590 containerd[1504]: time="2025-02-13T15:27:43.203563799Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:43.206147 containerd[1504]: time="2025-02-13T15:27:43.206096485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:43.206809 containerd[1504]: time="2025-02-13T15:27:43.206770094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.32302987s" Feb 13 15:27:43.206809 containerd[1504]: time="2025-02-13T15:27:43.206800332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 15:27:43.207443 containerd[1504]: time="2025-02-13T15:27:43.207254608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:27:43.219214 containerd[1504]: time="2025-02-13T15:27:43.219130860Z" level=info msg="CreateContainer within sandbox \"0f19cab355b2073274a0e3caafc2d52261e101dc642c73b671f3080d21bf1319\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:27:43.233262 containerd[1504]: time="2025-02-13T15:27:43.233217920Z" level=info msg="CreateContainer within sandbox \"0f19cab355b2073274a0e3caafc2d52261e101dc642c73b671f3080d21bf1319\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"72780354c4c197199ba6d01202577e036def3433450536b2a84cfe37ec2a12b4\"" Feb 13 15:27:43.233725 containerd[1504]: time="2025-02-13T15:27:43.233702343Z" level=info msg="StartContainer for \"72780354c4c197199ba6d01202577e036def3433450536b2a84cfe37ec2a12b4\"" Feb 13 15:27:43.262513 systemd[1]: Started cri-containerd-72780354c4c197199ba6d01202577e036def3433450536b2a84cfe37ec2a12b4.scope - libcontainer container 72780354c4c197199ba6d01202577e036def3433450536b2a84cfe37ec2a12b4. 
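The pull records just above name the same typha image three ways: a repo tag (ghcr.io/flatcar/calico/typha:v3.29.1), a repo digest (...typha@sha256:768a...), and an image id (the sha256 of the image config blob). A small sketch of how such a reference string decomposes; splitRef is a hypothetical helper for illustration, not a containerd API:

package main

import (
	"fmt"
	"strings"
)

// splitRef separates a pulled reference into repository, tag, and digest.
func splitRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		ref, digest = ref[:at], ref[at+1:]
	}
	// A tag colon must come after the last path separator (ports also use ':').
	if colon := strings.LastIndex(ref, ":"); colon > strings.LastIndex(ref, "/") {
		ref, tag = ref[:colon], ref[colon+1:]
	}
	return ref, tag, digest
}

func main() {
	r, t, _ := splitRef("ghcr.io/flatcar/calico/typha:v3.29.1")
	fmt.Println(r, t) // ghcr.io/flatcar/calico/typha v3.29.1

	r, _, d := splitRef("ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c")
	fmt.Println(r, d) // the digest pins the exact manifest the tag resolved to
}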
Feb 13 15:27:43.307209 containerd[1504]: time="2025-02-13T15:27:43.307167454Z" level=info msg="StartContainer for \"72780354c4c197199ba6d01202577e036def3433450536b2a84cfe37ec2a12b4\" returns successfully" Feb 13 15:27:44.208102 kubelet[2675]: E0213 15:27:44.208030 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:44.257018 kubelet[2675]: E0213 15:27:44.256986 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:44.279632 kubelet[2675]: I0213 15:27:44.279575 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7859dd9854-86p5w" podStartSLOduration=1.955927887 podStartE2EDuration="4.279526721s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:27:40.883434176 +0000 UTC m=+17.773749134" lastFinishedPulling="2025-02-13 15:27:43.20703301 +0000 UTC m=+20.097347968" observedRunningTime="2025-02-13 15:27:44.268273281 +0000 UTC m=+21.158588259" watchObservedRunningTime="2025-02-13 15:27:44.279526721 +0000 UTC m=+21.169841679" Feb 13 15:27:44.318843 kubelet[2675]: E0213 15:27:44.318800 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:44.318843 kubelet[2675]: W0213 15:27:44.318820 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:44.318843 kubelet[2675]: E0213 15:27:44.318840 2675 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
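The pod_startup_latency_tracker entry above carries two durations for calico-typha: podStartE2EDuration (the watch-observed running time minus the pod creation timestamp) and podStartSLOduration, which here equals the E2E duration minus the time spent pulling images. Reconstructing that arithmetic from the logged timestamps, as a sketch of the apparent relationship rather than the kubelet source:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the "2025-02-13 15:27:40.883434176 +0000 UTC" form in the
	// log; time.Parse accepts the optional fractional seconds automatically.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 15:27:40 +0000 UTC")
	firstPull := mustParse("2025-02-13 15:27:40.883434176 +0000 UTC")
	lastPull := mustParse("2025-02-13 15:27:43.20703301 +0000 UTC")
	observed := mustParse("2025-02-13 15:27:44.279526721 +0000 UTC")

	e2e := observed.Sub(created)       // 4.279526721s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 2.323598834s spent pulling images
	slo := e2e - pulling               // 1.955927887s = podStartSLOduration

	fmt.Println(e2e, pulling, slo)
}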
Feb 13 15:27:44.968320 containerd[1504]: time="2025-02-13T15:27:44.968250743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:44.969244 containerd[1504]: time="2025-02-13T15:27:44.969196636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 15:27:44.970628 containerd[1504]: time="2025-02-13T15:27:44.970574583Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:44.975208 containerd[1504]: time="2025-02-13T15:27:44.975159875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:44.975983 containerd[1504]: time="2025-02-13T15:27:44.975937170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.768652426s" Feb 13 15:27:44.976066 containerd[1504]: time="2025-02-13T15:27:44.975983878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:27:44.979404 containerd[1504]: time="2025-02-13T15:27:44.979333050Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:27:45.068117 containerd[1504]: time="2025-02-13T15:27:45.068064890Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21\"" Feb 13 15:27:45.068639 containerd[1504]: time="2025-02-13T15:27:45.068617522Z" level=info msg="StartContainer for \"d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21\"" Feb 13 15:27:45.101539 systemd[1]: Started cri-containerd-d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21.scope - libcontainer container d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21. Feb 13 15:27:45.137850 containerd[1504]: time="2025-02-13T15:27:45.137802209Z" level=info msg="StartContainer for \"d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21\" returns successfully" Feb 13 15:27:45.152074 systemd[1]: cri-containerd-d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21.scope: Deactivated successfully. Feb 13 15:27:45.176343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21-rootfs.mount: Deactivated successfully.
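The flexvol-driver container is short-lived by design: its job in Calico is to install the flexvolume driver binary onto the host and exit, which is why its .scope unit is deactivated right after the successful StartContainer. Once the binary lands under the nodeagent~uds directory, kubelet's probing failures recorded earlier stop. A hypothetical check of the expected <plugin-dir>/<vendor>~<driver>/<driver> layout, for illustration only (not kubelet or Calico code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// driverInstalled reports whether the flexvolume layout is in place: the
// driver binary lives at <plugin-dir>/<vendor>~<driver>/<driver>, e.g.
// nodeagent~uds/uds as in the errors above, and must be executable.
func driverInstalled(pluginDir, vendor, driver string) bool {
	p := filepath.Join(pluginDir, vendor+"~"+driver, driver)
	info, err := os.Stat(p)
	if err != nil {
		return false // not yet installed: "executable file not found"
	}
	return !info.IsDir() && info.Mode().Perm()&0o111 != 0
}

func main() {
	ok := driverInstalled("/opt/libexec/kubernetes/kubelet-plugins/volume/exec",
		"nodeagent", "uds")
	fmt.Println("flexvolume driver present:", ok)
}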
Feb 13 15:27:45.260681 kubelet[2675]: E0213 15:27:45.260546 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:45.261868 kubelet[2675]: E0213 15:27:45.261511 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:45.320832 containerd[1504]: time="2025-02-13T15:27:45.320759725Z" level=info msg="shim disconnected" id=d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21 namespace=k8s.io Feb 13 15:27:45.320832 containerd[1504]: time="2025-02-13T15:27:45.320824207Z" level=warning msg="cleaning up after shim disconnected" id=d905fbc106c271270cafee50f774ab6a6f788bae947eb963066d3a0ea7a95e21 namespace=k8s.io Feb 13 15:27:45.320832 containerd[1504]: time="2025-02-13T15:27:45.320835168Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:46.203521 kubelet[2675]: E0213 15:27:46.203455 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:46.262108 kubelet[2675]: E0213 15:27:46.262077 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:46.263321 kubelet[2675]: E0213 15:27:46.262276 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:46.263398 containerd[1504]: time="2025-02-13T15:27:46.262785565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:27:48.203016 kubelet[2675]: E0213 15:27:48.202972 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:50.202936 kubelet[2675]: E0213 15:27:50.202884 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:51.047217 containerd[1504]: time="2025-02-13T15:27:51.047153122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.048423 containerd[1504]: time="2025-02-13T15:27:51.048382785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:27:51.049944 containerd[1504]: time="2025-02-13T15:27:51.049830269Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.055535 containerd[1504]: time="2025-02-13T15:27:51.055127074Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.055756 containerd[1504]: time="2025-02-13T15:27:51.055684162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.79285825s" Feb 13 15:27:51.055756 containerd[1504]: time="2025-02-13T15:27:51.055711383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:27:51.057665 containerd[1504]: time="2025-02-13T15:27:51.057624884Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:27:51.083435 containerd[1504]: time="2025-02-13T15:27:51.083390657Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc\"" Feb 13 15:27:51.083940 containerd[1504]: time="2025-02-13T15:27:51.083861082Z" level=info msg="StartContainer for \"5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc\"" Feb 13 15:27:51.122508 systemd[1]: Started cri-containerd-5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc.scope - libcontainer container 5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc. Feb 13 15:27:51.165071 containerd[1504]: time="2025-02-13T15:27:51.164941143Z" level=info msg="StartContainer for \"5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc\" returns successfully" Feb 13 15:27:51.277399 kubelet[2675]: E0213 15:27:51.277096 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:52.203056 kubelet[2675]: E0213 15:27:52.203010 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:52.278273 kubelet[2675]: E0213 15:27:52.278229 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:52.323946 systemd[1]: cri-containerd-5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc.scope: Deactivated successfully. Feb 13 15:27:52.344955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc-rootfs.mount: Deactivated successfully. 
Feb 13 15:27:52.409110 kubelet[2675]: I0213 15:27:52.409069 2675 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:27:52.445109 kubelet[2675]: I0213 15:27:52.445062 2675 topology_manager.go:215] "Topology Admit Handler" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" podNamespace="kube-system" podName="coredns-76f75df574-fxmwv" Feb 13 15:27:52.447591 kubelet[2675]: I0213 15:27:52.447526 2675 topology_manager.go:215] "Topology Admit Handler" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" podNamespace="kube-system" podName="coredns-76f75df574-xr95c" Feb 13 15:27:52.450431 kubelet[2675]: I0213 15:27:52.450395 2675 topology_manager.go:215] "Topology Admit Handler" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" podNamespace="calico-apiserver" podName="calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:52.452820 kubelet[2675]: I0213 15:27:52.452759 2675 topology_manager.go:215] "Topology Admit Handler" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" podNamespace="calico-apiserver" podName="calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:52.452917 kubelet[2675]: I0213 15:27:52.452875 2675 topology_manager.go:215] "Topology Admit Handler" podUID="b8166788-4099-451f-b170-059d6e53e935" podNamespace="calico-system" podName="calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:52.456738 systemd[1]: Created slice kubepods-burstable-poddbe19860_015e_4dde_822b_2e8f3262322d.slice - libcontainer container kubepods-burstable-poddbe19860_015e_4dde_822b_2e8f3262322d.slice. Feb 13 15:27:52.462761 systemd[1]: Created slice kubepods-burstable-pod18b79007_649b_4cb6_ba9e_fdcdc956535a.slice - libcontainer container kubepods-burstable-pod18b79007_649b_4cb6_ba9e_fdcdc956535a.slice. Feb 13 15:27:52.467572 systemd[1]: Created slice kubepods-besteffort-podbaf6ef68_8ec3_4ba8_a759_b2c5d4c63b54.slice - libcontainer container kubepods-besteffort-podbaf6ef68_8ec3_4ba8_a759_b2c5d4c63b54.slice. Feb 13 15:27:52.472191 systemd[1]: Created slice kubepods-besteffort-podb8166788_4099_451f_b170_059d6e53e935.slice - libcontainer container kubepods-besteffort-podb8166788_4099_451f_b170_059d6e53e935.slice. Feb 13 15:27:52.476593 systemd[1]: Created slice kubepods-besteffort-pod8ed65fe4_b6cb_4506_afa3_9bfead75ba87.slice - libcontainer container kubepods-besteffort-pod8ed65fe4_b6cb_4506_afa3_9bfead75ba87.slice. 
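The Created slice lines show how the systemd cgroup driver derives a transient slice name for each admitted pod: the QoS class plus the pod UID with its dashes escaped to underscores. A sketch reproducing the naming observed above, inferred from these log lines rather than taken from the kubelet source:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the transient slice name for a pod: QoS class plus the pod
// UID with '-' escaped to '_', as systemd unit names reserve the dash.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "dbe19860-015e-4dde-822b-2e8f3262322d"))
	// kubepods-burstable-poddbe19860_015e_4dde_822b_2e8f3262322d.slice
	fmt.Println(podSlice("besteffort", "b8166788-4099-451f-b170-059d6e53e935"))
	// kubepods-besteffort-podb8166788_4099_451f_b170_059d6e53e935.slice
}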
Feb 13 15:27:52.489579 kubelet[2675]: I0213 15:27:52.489553 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxlc\" (UniqueName: \"kubernetes.io/projected/8ed65fe4-b6cb-4506-afa3-9bfead75ba87-kube-api-access-7bxlc\") pod \"calico-apiserver-75c78c8d9f-kw8hn\" (UID: \"8ed65fe4-b6cb-4506-afa3-9bfead75ba87\") " pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:52.489681 kubelet[2675]: I0213 15:27:52.489593 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn55\" (UniqueName: \"kubernetes.io/projected/18b79007-649b-4cb6-ba9e-fdcdc956535a-kube-api-access-shn55\") pod \"coredns-76f75df574-xr95c\" (UID: \"18b79007-649b-4cb6-ba9e-fdcdc956535a\") " pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:52.489681 kubelet[2675]: I0213 15:27:52.489615 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7kzj\" (UniqueName: \"kubernetes.io/projected/dbe19860-015e-4dde-822b-2e8f3262322d-kube-api-access-j7kzj\") pod \"coredns-76f75df574-fxmwv\" (UID: \"dbe19860-015e-4dde-822b-2e8f3262322d\") " pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:52.489681 kubelet[2675]: I0213 15:27:52.489677 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vbl\" (UniqueName: \"kubernetes.io/projected/baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54-kube-api-access-j5vbl\") pod \"calico-apiserver-75c78c8d9f-dzkbh\" (UID: \"baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54\") " pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:52.489767 kubelet[2675]: I0213 15:27:52.489712 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8166788-4099-451f-b170-059d6e53e935-tigera-ca-bundle\") pod \"calico-kube-controllers-5cb78c4b87-jxmw6\" (UID: \"b8166788-4099-451f-b170-059d6e53e935\") " pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:52.489825 kubelet[2675]: I0213 15:27:52.489760 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18b79007-649b-4cb6-ba9e-fdcdc956535a-config-volume\") pod \"coredns-76f75df574-xr95c\" (UID: \"18b79007-649b-4cb6-ba9e-fdcdc956535a\") " pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:52.489825 kubelet[2675]: I0213 15:27:52.489821 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbe19860-015e-4dde-822b-2e8f3262322d-config-volume\") pod \"coredns-76f75df574-fxmwv\" (UID: \"dbe19860-015e-4dde-822b-2e8f3262322d\") " pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:52.489825 kubelet[2675]: I0213 15:27:52.489846 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54-calico-apiserver-certs\") pod \"calico-apiserver-75c78c8d9f-dzkbh\" (UID: \"baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54\") " pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:52.490226 kubelet[2675]: I0213 15:27:52.489875 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97p2\" 
(UniqueName: \"kubernetes.io/projected/b8166788-4099-451f-b170-059d6e53e935-kube-api-access-f97p2\") pod \"calico-kube-controllers-5cb78c4b87-jxmw6\" (UID: \"b8166788-4099-451f-b170-059d6e53e935\") " pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:52.490226 kubelet[2675]: I0213 15:27:52.489903 2675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ed65fe4-b6cb-4506-afa3-9bfead75ba87-calico-apiserver-certs\") pod \"calico-apiserver-75c78c8d9f-kw8hn\" (UID: \"8ed65fe4-b6cb-4506-afa3-9bfead75ba87\") " pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:52.582339 containerd[1504]: time="2025-02-13T15:27:52.581682737Z" level=info msg="shim disconnected" id=5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc namespace=k8s.io Feb 13 15:27:52.582339 containerd[1504]: time="2025-02-13T15:27:52.581743291Z" level=warning msg="cleaning up after shim disconnected" id=5197224e02dff5e979e6c34a80b332b8378136639e852a5cd1121d49827ec5fc namespace=k8s.io Feb 13 15:27:52.582339 containerd[1504]: time="2025-02-13T15:27:52.581754443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:52.760700 kubelet[2675]: E0213 15:27:52.760488 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:52.761179 containerd[1504]: time="2025-02-13T15:27:52.761123402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:52.764980 kubelet[2675]: E0213 15:27:52.764940 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:52.765461 containerd[1504]: time="2025-02-13T15:27:52.765418090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:52.771212 containerd[1504]: time="2025-02-13T15:27:52.771149059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:27:52.774915 containerd[1504]: time="2025-02-13T15:27:52.774861001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:52.779822 containerd[1504]: time="2025-02-13T15:27:52.779675187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:27:52.897020 containerd[1504]: time="2025-02-13T15:27:52.896913850Z" level=error msg="Failed to destroy network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.897902 containerd[1504]: time="2025-02-13T15:27:52.897800548Z" level=error msg="encountered an error cleaning up failed sandbox 
\"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.897902 containerd[1504]: time="2025-02-13T15:27:52.897861543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.898767 containerd[1504]: time="2025-02-13T15:27:52.898665364Z" level=error msg="Failed to destroy network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.898916 kubelet[2675]: E0213 15:27:52.898812 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.898916 kubelet[2675]: E0213 15:27:52.898886 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:52.898916 kubelet[2675]: E0213 15:27:52.898918 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:52.899097 kubelet[2675]: E0213 15:27:52.898981 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:52.899580 containerd[1504]: time="2025-02-13T15:27:52.899448618Z" 
level=error msg="encountered an error cleaning up failed sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.899793 containerd[1504]: time="2025-02-13T15:27:52.899769030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.900285 kubelet[2675]: E0213 15:27:52.900238 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.900334 kubelet[2675]: E0213 15:27:52.900318 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:52.900482 kubelet[2675]: E0213 15:27:52.900373 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:52.900482 kubelet[2675]: E0213 15:27:52.900444 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fxmwv" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" Feb 13 15:27:52.926326 containerd[1504]: time="2025-02-13T15:27:52.925456688Z" level=error msg="Failed to destroy network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.926515 
containerd[1504]: time="2025-02-13T15:27:52.926461379Z" level=error msg="encountered an error cleaning up failed sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.926665 containerd[1504]: time="2025-02-13T15:27:52.926632090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.927060 kubelet[2675]: E0213 15:27:52.927020 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.927115 kubelet[2675]: E0213 15:27:52.927101 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:52.927145 kubelet[2675]: E0213 15:27:52.927126 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:52.927254 kubelet[2675]: E0213 15:27:52.927211 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" Feb 13 15:27:52.937470 containerd[1504]: time="2025-02-13T15:27:52.937417286Z" level=error msg="Failed to destroy network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.938039 containerd[1504]: time="2025-02-13T15:27:52.937995184Z" level=error msg="encountered an error cleaning up failed sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.938187 containerd[1504]: time="2025-02-13T15:27:52.938055366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.938384 kubelet[2675]: E0213 15:27:52.938343 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.938459 kubelet[2675]: E0213 15:27:52.938418 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:52.938459 kubelet[2675]: E0213 15:27:52.938440 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:52.938512 kubelet[2675]: E0213 15:27:52.938497 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podUID="b8166788-4099-451f-b170-059d6e53e935" Feb 13 15:27:52.943560 containerd[1504]: time="2025-02-13T15:27:52.943508183Z" level=error 
msg="Failed to destroy network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.943996 containerd[1504]: time="2025-02-13T15:27:52.943965803Z" level=error msg="encountered an error cleaning up failed sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.944070 containerd[1504]: time="2025-02-13T15:27:52.944031688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.944288 kubelet[2675]: E0213 15:27:52.944265 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:52.944337 kubelet[2675]: E0213 15:27:52.944317 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:52.944377 kubelet[2675]: E0213 15:27:52.944338 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:52.944445 kubelet[2675]: E0213 15:27:52.944409 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" Feb 13 15:27:53.281140 kubelet[2675]: I0213 15:27:53.280659 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf" Feb 13 15:27:53.281702 containerd[1504]: time="2025-02-13T15:27:53.281257334Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:27:53.281702 containerd[1504]: time="2025-02-13T15:27:53.281521020Z" level=info msg="Ensure that sandbox 9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf in task-service has been cleanup successfully" Feb 13 15:27:53.281780 containerd[1504]: time="2025-02-13T15:27:53.281721827Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:27:53.281780 containerd[1504]: time="2025-02-13T15:27:53.281736294Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:27:53.282510 kubelet[2675]: I0213 15:27:53.282218 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447" Feb 13 15:27:53.282560 containerd[1504]: time="2025-02-13T15:27:53.282432113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:1,}" Feb 13 15:27:53.283608 containerd[1504]: time="2025-02-13T15:27:53.283196120Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:27:53.283608 containerd[1504]: time="2025-02-13T15:27:53.283452442Z" level=info msg="Ensure that sandbox a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447 in task-service has been cleanup successfully" Feb 13 15:27:53.283716 containerd[1504]: time="2025-02-13T15:27:53.283666584Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:27:53.283716 containerd[1504]: time="2025-02-13T15:27:53.283681743Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:27:53.283831 kubelet[2675]: E0213 15:27:53.283810 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:53.283879 kubelet[2675]: E0213 15:27:53.283872 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:53.284086 containerd[1504]: time="2025-02-13T15:27:53.284052189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:1,}" Feb 13 15:27:53.284738 containerd[1504]: time="2025-02-13T15:27:53.284714185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:27:53.284961 kubelet[2675]: I0213 15:27:53.284930 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f" Feb 13 15:27:53.285371 containerd[1504]: 
time="2025-02-13T15:27:53.285329251Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:27:53.285530 containerd[1504]: time="2025-02-13T15:27:53.285498319Z" level=info msg="Ensure that sandbox 6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f in task-service has been cleanup successfully" Feb 13 15:27:53.285679 kubelet[2675]: I0213 15:27:53.285661 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8" Feb 13 15:27:53.285763 containerd[1504]: time="2025-02-13T15:27:53.285664442Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:27:53.285763 containerd[1504]: time="2025-02-13T15:27:53.285676164Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:27:53.286093 containerd[1504]: time="2025-02-13T15:27:53.286071237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:27:53.286242 containerd[1504]: time="2025-02-13T15:27:53.286213976Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:53.286442 containerd[1504]: time="2025-02-13T15:27:53.286376030Z" level=info msg="Ensure that sandbox cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8 in task-service has been cleanup successfully" Feb 13 15:27:53.286502 kubelet[2675]: I0213 15:27:53.286381 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2" Feb 13 15:27:53.286552 containerd[1504]: time="2025-02-13T15:27:53.286520051Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:53.286552 containerd[1504]: time="2025-02-13T15:27:53.286531201Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:27:53.287043 kubelet[2675]: E0213 15:27:53.286664 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:53.287098 containerd[1504]: time="2025-02-13T15:27:53.286720247Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:27:53.287098 containerd[1504]: time="2025-02-13T15:27:53.286820135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:1,}" Feb 13 15:27:53.287098 containerd[1504]: time="2025-02-13T15:27:53.286919162Z" level=info msg="Ensure that sandbox 20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2 in task-service has been cleanup successfully" Feb 13 15:27:53.287310 containerd[1504]: time="2025-02-13T15:27:53.287285450Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:27:53.287310 containerd[1504]: time="2025-02-13T15:27:53.287304497Z" level=info msg="StopPodSandbox for 
\"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:27:53.290436 containerd[1504]: time="2025-02-13T15:27:53.290386833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:27:53.345266 systemd[1]: run-netns-cni\x2d6f822810\x2d3392\x2d6ff5\x2dd54d\x2d02a8b6466463.mount: Deactivated successfully. Feb 13 15:27:53.345402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447-shm.mount: Deactivated successfully. Feb 13 15:27:53.566649 containerd[1504]: time="2025-02-13T15:27:53.566018453Z" level=error msg="Failed to destroy network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.568559 containerd[1504]: time="2025-02-13T15:27:53.568534806Z" level=error msg="encountered an error cleaning up failed sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.568690 containerd[1504]: time="2025-02-13T15:27:53.568671843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.569096 kubelet[2675]: E0213 15:27:53.569071 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.569367 kubelet[2675]: E0213 15:27:53.569252 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:53.569367 kubelet[2675]: E0213 15:27:53.569281 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 
15:27:53.569604 kubelet[2675]: E0213 15:27:53.569476 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podUID="b8166788-4099-451f-b170-059d6e53e935" Feb 13 15:27:53.570491 containerd[1504]: time="2025-02-13T15:27:53.569835291Z" level=error msg="Failed to destroy network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.571672 containerd[1504]: time="2025-02-13T15:27:53.571641027Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.571847 containerd[1504]: time="2025-02-13T15:27:53.571752948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.572180 kubelet[2675]: E0213 15:27:53.572030 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.572180 kubelet[2675]: E0213 15:27:53.572088 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:53.572180 kubelet[2675]: E0213 15:27:53.572108 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:53.572283 kubelet[2675]: E0213 15:27:53.572156 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:53.579443 containerd[1504]: time="2025-02-13T15:27:53.579323586Z" level=error msg="Failed to destroy network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.579852 containerd[1504]: time="2025-02-13T15:27:53.579805191Z" level=error msg="encountered an error cleaning up failed sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.579893 containerd[1504]: time="2025-02-13T15:27:53.579862999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.580547 kubelet[2675]: E0213 15:27:53.580069 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.580547 kubelet[2675]: E0213 15:27:53.580119 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:53.580547 kubelet[2675]: E0213 15:27:53.580141 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:53.580697 kubelet[2675]: E0213 15:27:53.580192 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" Feb 13 15:27:53.587242 containerd[1504]: time="2025-02-13T15:27:53.587118956Z" level=error msg="Failed to destroy network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.587796 containerd[1504]: time="2025-02-13T15:27:53.587671325Z" level=error msg="encountered an error cleaning up failed sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.587796 containerd[1504]: time="2025-02-13T15:27:53.587735335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.587961 kubelet[2675]: E0213 15:27:53.587932 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.588010 kubelet[2675]: E0213 15:27:53.587971 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:53.588010 kubelet[2675]: E0213 15:27:53.587991 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:53.588061 kubelet[2675]: E0213 15:27:53.588043 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" Feb 13 15:27:53.591632 containerd[1504]: time="2025-02-13T15:27:53.591559987Z" level=error msg="Failed to destroy network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.592119 containerd[1504]: time="2025-02-13T15:27:53.592076008Z" level=error msg="encountered an error cleaning up failed sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.592197 containerd[1504]: time="2025-02-13T15:27:53.592153764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.592414 kubelet[2675]: E0213 15:27:53.592389 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:53.592501 kubelet[2675]: E0213 15:27:53.592427 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:53.592501 kubelet[2675]: 
E0213 15:27:53.592446 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:53.592501 kubelet[2675]: E0213 15:27:53.592493 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fxmwv" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" Feb 13 15:27:54.059061 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:33118.service - OpenSSH per-connection server daemon (10.0.0.1:33118). Feb 13 15:27:54.104596 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 33118 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:54.106267 sshd-session[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:54.110685 systemd-logind[1483]: New session 8 of user core. Feb 13 15:27:54.120502 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:27:54.213884 systemd[1]: Created slice kubepods-besteffort-pod782fda46_0c98_43d2_919a_69ce574b5e7e.slice - libcontainer container kubepods-besteffort-pod782fda46_0c98_43d2_919a_69ce574b5e7e.slice. Feb 13 15:27:54.218128 containerd[1504]: time="2025-02-13T15:27:54.217307875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:54.252832 sshd[3815]: Connection closed by 10.0.0.1 port 33118 Feb 13 15:27:54.254574 sshd-session[3813]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:54.258897 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:33118.service: Deactivated successfully. Feb 13 15:27:54.260887 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:27:54.261693 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:27:54.262951 systemd-logind[1483]: Removed session 8. 
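Every CNI ADD and DEL in the entries above fails the same way: the calico plugin cannot stat /var/lib/calico/nodename. That file is written by the calico/node container once it starts, and the PullImage "ghcr.io/flatcar/calico/node:v3.29.1" entry at 15:27:53 shows that image is still being fetched, so every sandbox attempt made before calico-node runs is expected to fail with this message. A minimal Go sketch of the gate the plugin is applying (not Calico's actual source; detectNodename is an illustrative name, the path and hint text are taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path taken verbatim from the journal entries above.
const nodenameFile = "/var/lib/calico/nodename"

// detectNodename mirrors the failure mode in the log: while calico/node has
// not yet written its nodename file, the CNI operation is refused with the
// same "stat ...: no such file or directory" hint the journal shows.
func detectNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI would proceed for node:", name)
}

Once calico-node starts and writes the file, the RunPodSandbox retries seen below can succeed without any operator action.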
Feb 13 15:27:54.289506 kubelet[2675]: I0213 15:27:54.289465 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1" Feb 13 15:27:54.290155 containerd[1504]: time="2025-02-13T15:27:54.290125463Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:27:54.290511 containerd[1504]: time="2025-02-13T15:27:54.290489708Z" level=info msg="Ensure that sandbox 12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1 in task-service has been cleanup successfully" Feb 13 15:27:54.290833 containerd[1504]: time="2025-02-13T15:27:54.290807977Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:27:54.290833 containerd[1504]: time="2025-02-13T15:27:54.290827133Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:27:54.291048 containerd[1504]: time="2025-02-13T15:27:54.291030395Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:27:54.291260 containerd[1504]: time="2025-02-13T15:27:54.291228187Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:27:54.291260 containerd[1504]: time="2025-02-13T15:27:54.291248896Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:27:54.291780 containerd[1504]: time="2025-02-13T15:27:54.291600928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:27:54.291985 containerd[1504]: time="2025-02-13T15:27:54.291917733Z" level=error msg="Failed to destroy network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.292235 kubelet[2675]: I0213 15:27:54.292196 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486" Feb 13 15:27:54.292610 containerd[1504]: time="2025-02-13T15:27:54.292576823Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:27:54.292713 containerd[1504]: time="2025-02-13T15:27:54.292680878Z" level=error msg="encountered an error cleaning up failed sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.292875 containerd[1504]: time="2025-02-13T15:27:54.292759297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.292875 containerd[1504]: time="2025-02-13T15:27:54.292800073Z" level=info msg="Ensure that sandbox ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486 in task-service has been cleanup successfully" Feb 13 15:27:54.292991 containerd[1504]: time="2025-02-13T15:27:54.292952510Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:27:54.292991 containerd[1504]: time="2025-02-13T15:27:54.292969472Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:27:54.293097 kubelet[2675]: E0213 15:27:54.293077 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.293167 kubelet[2675]: E0213 15:27:54.293118 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:54.293167 kubelet[2675]: E0213 15:27:54.293138 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:54.293271 kubelet[2675]: E0213 15:27:54.293174 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:54.293384 containerd[1504]: time="2025-02-13T15:27:54.293165160Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:54.293384 containerd[1504]: time="2025-02-13T15:27:54.293248235Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:54.293384 containerd[1504]: time="2025-02-13T15:27:54.293257683Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" 
returns successfully" Feb 13 15:27:54.293506 kubelet[2675]: I0213 15:27:54.293428 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2" Feb 13 15:27:54.293506 kubelet[2675]: E0213 15:27:54.293463 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:54.293663 containerd[1504]: time="2025-02-13T15:27:54.293637958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:2,}" Feb 13 15:27:54.293787 containerd[1504]: time="2025-02-13T15:27:54.293727286Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:27:54.293877 containerd[1504]: time="2025-02-13T15:27:54.293857741Z" level=info msg="Ensure that sandbox 0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2 in task-service has been cleanup successfully" Feb 13 15:27:54.294011 containerd[1504]: time="2025-02-13T15:27:54.293995431Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:27:54.294011 containerd[1504]: time="2025-02-13T15:27:54.294009978Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:27:54.294364 containerd[1504]: time="2025-02-13T15:27:54.294315953Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:27:54.294498 containerd[1504]: time="2025-02-13T15:27:54.294468670Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:27:54.294531 containerd[1504]: time="2025-02-13T15:27:54.294498937Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:27:54.295123 kubelet[2675]: I0213 15:27:54.294913 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42" Feb 13 15:27:54.295182 containerd[1504]: time="2025-02-13T15:27:54.294982547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:27:54.295224 containerd[1504]: time="2025-02-13T15:27:54.295184507Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:27:54.295368 containerd[1504]: time="2025-02-13T15:27:54.295331572Z" level=info msg="Ensure that sandbox 949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42 in task-service has been cleanup successfully" Feb 13 15:27:54.295553 containerd[1504]: time="2025-02-13T15:27:54.295525808Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:27:54.295553 containerd[1504]: time="2025-02-13T15:27:54.295541307Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:27:54.295889 containerd[1504]: time="2025-02-13T15:27:54.295868302Z" level=info msg="StopPodSandbox for 
\"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:27:54.295954 containerd[1504]: time="2025-02-13T15:27:54.295940568Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:27:54.295988 containerd[1504]: time="2025-02-13T15:27:54.295952670Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:27:54.296451 kubelet[2675]: I0213 15:27:54.296153 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02" Feb 13 15:27:54.296491 containerd[1504]: time="2025-02-13T15:27:54.296325291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:2,}" Feb 13 15:27:54.296585 containerd[1504]: time="2025-02-13T15:27:54.296557928Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:27:54.296818 containerd[1504]: time="2025-02-13T15:27:54.296783011Z" level=info msg="Ensure that sandbox 90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02 in task-service has been cleanup successfully" Feb 13 15:27:54.296989 containerd[1504]: time="2025-02-13T15:27:54.296964653Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:27:54.297025 containerd[1504]: time="2025-02-13T15:27:54.296986835Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:27:54.297236 containerd[1504]: time="2025-02-13T15:27:54.297215796Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:27:54.297317 containerd[1504]: time="2025-02-13T15:27:54.297293512Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:27:54.297317 containerd[1504]: time="2025-02-13T15:27:54.297308630Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:27:54.297487 kubelet[2675]: E0213 15:27:54.297472 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:54.297740 containerd[1504]: time="2025-02-13T15:27:54.297711137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:2,}" Feb 13 15:27:54.346740 systemd[1]: run-netns-cni\x2d3b739dae\x2d9031\x2dc95e\x2d453c\x2dd7fc0a556d64.mount: Deactivated successfully. Feb 13 15:27:54.346885 systemd[1]: run-netns-cni\x2d1c5c80fd\x2d7b43\x2d43ca\x2d0d58\x2d1a4e9cf14169.mount: Deactivated successfully. Feb 13 15:27:54.346989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42-shm.mount: Deactivated successfully. Feb 13 15:27:54.347097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1-shm.mount: Deactivated successfully. 
Feb 13 15:27:54.347260 systemd[1]: run-netns-cni\x2d6c6e851c\x2d75cf\x2d82da\x2dac74\x2d8ff6ee524173.mount: Deactivated successfully. Feb 13 15:27:54.347416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2-shm.mount: Deactivated successfully. Feb 13 15:27:54.347527 systemd[1]: run-netns-cni\x2dbfbf1b20\x2d77c9\x2d7df2\x2de06b\x2da3fd43d5f758.mount: Deactivated successfully. Feb 13 15:27:54.347619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486-shm.mount: Deactivated successfully. Feb 13 15:27:54.472932 containerd[1504]: time="2025-02-13T15:27:54.472818120Z" level=error msg="Failed to destroy network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.475668 containerd[1504]: time="2025-02-13T15:27:54.475612515Z" level=error msg="encountered an error cleaning up failed sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.475739 containerd[1504]: time="2025-02-13T15:27:54.475698246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.476266 kubelet[2675]: E0213 15:27:54.476234 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.476381 kubelet[2675]: E0213 15:27:54.476307 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:54.476381 kubelet[2675]: E0213 15:27:54.476335 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:54.476542 kubelet[2675]: E0213 
15:27:54.476440 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" Feb 13 15:27:54.482556 containerd[1504]: time="2025-02-13T15:27:54.482501678Z" level=error msg="Failed to destroy network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.484145 containerd[1504]: time="2025-02-13T15:27:54.484040281Z" level=error msg="encountered an error cleaning up failed sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.484145 containerd[1504]: time="2025-02-13T15:27:54.484114321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.484487 kubelet[2675]: E0213 15:27:54.484466 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.484644 kubelet[2675]: E0213 15:27:54.484605 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:54.484771 kubelet[2675]: E0213 15:27:54.484712 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:54.485567 kubelet[2675]: E0213 15:27:54.485039 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:54.490787 containerd[1504]: time="2025-02-13T15:27:54.490751871Z" level=error msg="Failed to destroy network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.491849 containerd[1504]: time="2025-02-13T15:27:54.491704472Z" level=error msg="encountered an error cleaning up failed sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.491849 containerd[1504]: time="2025-02-13T15:27:54.491759847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.491989 kubelet[2675]: E0213 15:27:54.491961 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.492038 kubelet[2675]: E0213 15:27:54.492016 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:54.492066 kubelet[2675]: E0213 15:27:54.492051 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:54.492122 kubelet[2675]: E0213 15:27:54.492102 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" Feb 13 15:27:54.495070 containerd[1504]: time="2025-02-13T15:27:54.494999478Z" level=error msg="Failed to destroy network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.495591 containerd[1504]: time="2025-02-13T15:27:54.495559150Z" level=error msg="encountered an error cleaning up failed sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.495654 containerd[1504]: time="2025-02-13T15:27:54.495629523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.495918 kubelet[2675]: E0213 15:27:54.495878 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.495918 kubelet[2675]: E0213 15:27:54.495916 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:54.496002 kubelet[2675]: E0213 15:27:54.495934 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:54.496002 kubelet[2675]: E0213 15:27:54.495974 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fxmwv" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" Feb 13 15:27:54.496179 containerd[1504]: time="2025-02-13T15:27:54.496005049Z" level=error msg="Failed to destroy network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.496395 containerd[1504]: time="2025-02-13T15:27:54.496371728Z" level=error msg="encountered an error cleaning up failed sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.496431 containerd[1504]: time="2025-02-13T15:27:54.496413657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.496572 kubelet[2675]: E0213 15:27:54.496554 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:54.496609 kubelet[2675]: E0213 15:27:54.496586 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:54.496609 kubelet[2675]: E0213 15:27:54.496603 2675 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:54.496660 kubelet[2675]: E0213 15:27:54.496640 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podUID="b8166788-4099-451f-b170-059d6e53e935" Feb 13 15:27:55.301512 kubelet[2675]: I0213 15:27:55.301458 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee" Feb 13 15:27:55.440311 kubelet[2675]: I0213 15:27:55.303995 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740" Feb 13 15:27:55.440311 kubelet[2675]: I0213 15:27:55.322902 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff" Feb 13 15:27:55.440311 kubelet[2675]: E0213 15:27:55.323331 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:55.440311 kubelet[2675]: I0213 15:27:55.324980 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302" Feb 13 15:27:55.440311 kubelet[2675]: I0213 15:27:55.326237 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a" Feb 13 15:27:55.440311 kubelet[2675]: E0213 15:27:55.327304 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:55.440311 kubelet[2675]: I0213 15:27:55.327925 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2" Feb 13 15:27:55.345268 systemd[1]: run-netns-cni\x2d7f8045c6\x2da189\x2d0eda\x2dbdd9\x2d44f76276425e.mount: Deactivated successfully. 
Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.302078362Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.302325067Z" level=info msg="Ensure that sandbox 25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee in task-service has been cleanup successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.302722473Z" level=info msg="TearDown network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.302739315Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303131832Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303225008Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303235298Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303739245Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303819877Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.303834093Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.321896381Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.321903634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322103058Z" level=info msg="Ensure that sandbox 983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740 in task-service has been cleanup successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322334795Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322372135Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322616634Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322709639Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:27:55.441029 containerd[1504]: 
time="2025-02-13T15:27:55.322751818Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.322989856Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323063965Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323073553Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323481791Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323686315Z" level=info msg="Ensure that sandbox e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff in task-service has been cleanup successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323908022Z" level=info msg="TearDown network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.323935704Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324071610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:3,}" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324281635Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324406218Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324428139Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324785963Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324874599Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.324887633Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.325227883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.325493252Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.325616955Z" level=info msg="Ensure 
that sandbox a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302 in task-service has been cleanup successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.325753782Z" level=info msg="TearDown network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" successfully" Feb 13 15:27:55.441029 containerd[1504]: time="2025-02-13T15:27:55.325766186Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" returns successfully" Feb 13 15:27:55.345378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302-shm.mount: Deactivated successfully. Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.325940372Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326006858Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326015324Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326700121Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326731510Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326860613Z" level=info msg="Ensure that sandbox 641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a in task-service has been cleanup successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326927288Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.326937798Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.327034670Z" level=info msg="TearDown network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.327052042Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.327414644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:1,}" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.327524381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:3,}" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328287124Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328472052Z" level=info msg="Ensure that sandbox 
abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2 in task-service has been cleanup successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328613959Z" level=info msg="TearDown network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328625170Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328846967Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328922459Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.328931546Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.329106505Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.329206422Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.329243994Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:27:55.442258 containerd[1504]: time="2025-02-13T15:27:55.329567542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:3,}" Feb 13 15:27:55.345458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2-shm.mount: Deactivated successfully. Feb 13 15:27:55.345531 systemd[1]: run-netns-cni\x2dda8ca667\x2d0df6\x2d4f78\x2ddfd8\x2df94fc23cc463.mount: Deactivated successfully. Feb 13 15:27:55.345611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740-shm.mount: Deactivated successfully. Feb 13 15:27:55.345683 systemd[1]: run-netns-cni\x2d97499564\x2d592e\x2dfe1b\x2dc897\x2d46c15daf9de5.mount: Deactivated successfully. Feb 13 15:27:55.345752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee-shm.mount: Deactivated successfully. Feb 13 15:27:55.345828 systemd[1]: run-netns-cni\x2dff4842d0\x2d7b16\x2d77ff\x2d2130\x2d141a4523687b.mount: Deactivated successfully. Feb 13 15:27:55.345898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff-shm.mount: Deactivated successfully. Feb 13 15:27:55.345971 systemd[1]: run-netns-cni\x2d2d03ce9d\x2d8ead\x2d96cf\x2df3aa\x2db3e1c9834851.mount: Deactivated successfully. 
Feb 13 15:27:56.314884 containerd[1504]: time="2025-02-13T15:27:56.314783235Z" level=error msg="Failed to destroy network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.315453 containerd[1504]: time="2025-02-13T15:27:56.315290599Z" level=error msg="encountered an error cleaning up failed sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.315453 containerd[1504]: time="2025-02-13T15:27:56.315377563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.315839 kubelet[2675]: E0213 15:27:56.315619 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.315839 kubelet[2675]: E0213 15:27:56.315693 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:56.315839 kubelet[2675]: E0213 15:27:56.315719 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:56.316323 kubelet[2675]: E0213 15:27:56.315791 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:56.334801 kubelet[2675]: I0213 15:27:56.333320 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436" Feb 13 15:27:56.335464 containerd[1504]: time="2025-02-13T15:27:56.335412813Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:27:56.337177 containerd[1504]: time="2025-02-13T15:27:56.337141763Z" level=info msg="Ensure that sandbox 4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436 in task-service has been cleanup successfully" Feb 13 15:27:56.338497 containerd[1504]: time="2025-02-13T15:27:56.338466693Z" level=info msg="TearDown network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" successfully" Feb 13 15:27:56.338828 containerd[1504]: time="2025-02-13T15:27:56.338786805Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" returns successfully" Feb 13 15:27:56.340486 containerd[1504]: time="2025-02-13T15:27:56.340464709Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:27:56.341219 containerd[1504]: time="2025-02-13T15:27:56.341200522Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:27:56.341934 containerd[1504]: time="2025-02-13T15:27:56.341916979Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:27:56.342973 containerd[1504]: time="2025-02-13T15:27:56.342850874Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:27:56.342973 containerd[1504]: time="2025-02-13T15:27:56.342925755Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:27:56.342973 containerd[1504]: time="2025-02-13T15:27:56.342934582Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:27:56.343735 containerd[1504]: time="2025-02-13T15:27:56.343577240Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:56.343735 containerd[1504]: time="2025-02-13T15:27:56.343667189Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:56.343735 containerd[1504]: time="2025-02-13T15:27:56.343677658Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:27:56.343978 kubelet[2675]: E0213 15:27:56.343941 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:56.347024 containerd[1504]: time="2025-02-13T15:27:56.346795069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:4,}" Feb 13 15:27:56.359931 systemd[1]: run-netns-cni\x2d33347814\x2d6262\x2db9fd\x2d14bc\x2d6a1502615625.mount: Deactivated successfully. 
Feb 13 15:27:56.360273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436-shm.mount: Deactivated successfully. Feb 13 15:27:56.378538 containerd[1504]: time="2025-02-13T15:27:56.378475327Z" level=error msg="Failed to destroy network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.380846 containerd[1504]: time="2025-02-13T15:27:56.380713134Z" level=error msg="encountered an error cleaning up failed sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.380969 containerd[1504]: time="2025-02-13T15:27:56.380947154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.381465 kubelet[2675]: E0213 15:27:56.381411 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.381528 kubelet[2675]: E0213 15:27:56.381472 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:56.381528 kubelet[2675]: E0213 15:27:56.381494 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:56.381745 kubelet[2675]: E0213 15:27:56.381645 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" Feb 13 15:27:56.384242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b-shm.mount: Deactivated successfully. Feb 13 15:27:56.389417 containerd[1504]: time="2025-02-13T15:27:56.385585412Z" level=error msg="Failed to destroy network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.389417 containerd[1504]: time="2025-02-13T15:27:56.386328499Z" level=error msg="encountered an error cleaning up failed sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.389417 containerd[1504]: time="2025-02-13T15:27:56.386452372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.388337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044-shm.mount: Deactivated successfully. 
Feb 13 15:27:56.389602 kubelet[2675]: E0213 15:27:56.386732 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.389602 kubelet[2675]: E0213 15:27:56.386775 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:56.389602 kubelet[2675]: E0213 15:27:56.386793 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:56.389693 kubelet[2675]: E0213 15:27:56.386901 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:56.402926 containerd[1504]: time="2025-02-13T15:27:56.402862297Z" level=error msg="Failed to destroy network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.404879 containerd[1504]: time="2025-02-13T15:27:56.404845055Z" level=error msg="encountered an error cleaning up failed sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.405134 containerd[1504]: time="2025-02-13T15:27:56.405102689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.405395 kubelet[2675]: E0213 15:27:56.405367 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.405490 kubelet[2675]: E0213 15:27:56.405423 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:56.405490 kubelet[2675]: E0213 15:27:56.405443 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:56.405544 kubelet[2675]: E0213 15:27:56.405503 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" Feb 13 15:27:56.407297 containerd[1504]: time="2025-02-13T15:27:56.407127064Z" level=error msg="Failed to destroy network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.408210 containerd[1504]: time="2025-02-13T15:27:56.408057333Z" level=error msg="encountered an error cleaning up failed sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.408210 containerd[1504]: time="2025-02-13T15:27:56.408117315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.408321 kubelet[2675]: E0213 15:27:56.408294 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.408477 kubelet[2675]: E0213 15:27:56.408336 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:56.408477 kubelet[2675]: E0213 15:27:56.408374 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:56.408477 kubelet[2675]: E0213 15:27:56.408411 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fxmwv" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" Feb 13 15:27:56.419014 containerd[1504]: time="2025-02-13T15:27:56.418957923Z" level=error msg="Failed to destroy network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.420880 containerd[1504]: time="2025-02-13T15:27:56.420530558Z" level=error msg="encountered an error cleaning up failed sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.420880 containerd[1504]: time="2025-02-13T15:27:56.420599399Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.421002 kubelet[2675]: E0213 15:27:56.420830 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.421002 kubelet[2675]: E0213 15:27:56.420874 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:56.421002 kubelet[2675]: E0213 15:27:56.420955 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:56.421091 kubelet[2675]: E0213 15:27:56.421012 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podUID="b8166788-4099-451f-b170-059d6e53e935" Feb 13 15:27:56.449044 containerd[1504]: time="2025-02-13T15:27:56.448998961Z" level=error msg="Failed to destroy network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.449827 containerd[1504]: time="2025-02-13T15:27:56.449764360Z" level=error msg="encountered an error cleaning up failed sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.449827 containerd[1504]: time="2025-02-13T15:27:56.449824683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.450056 kubelet[2675]: E0213 15:27:56.450014 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:56.450104 kubelet[2675]: E0213 15:27:56.450064 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:56.450104 kubelet[2675]: E0213 15:27:56.450085 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:56.450222 kubelet[2675]: E0213 15:27:56.450133 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:57.338005 kubelet[2675]: I0213 15:27:57.337923 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d" Feb 13 15:27:57.339460 containerd[1504]: time="2025-02-13T15:27:57.339402821Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" Feb 13 15:27:57.339636 containerd[1504]: time="2025-02-13T15:27:57.339617755Z" level=info msg="Ensure that sandbox 0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d in task-service has been cleanup successfully" Feb 13 15:27:57.339834 containerd[1504]: time="2025-02-13T15:27:57.339812892Z" level=info 
msg="TearDown network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" successfully" Feb 13 15:27:57.339865 containerd[1504]: time="2025-02-13T15:27:57.339850993Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" returns successfully" Feb 13 15:27:57.340497 containerd[1504]: time="2025-02-13T15:27:57.340466039Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:27:57.340618 containerd[1504]: time="2025-02-13T15:27:57.340564965Z" level=info msg="TearDown network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" successfully" Feb 13 15:27:57.340618 containerd[1504]: time="2025-02-13T15:27:57.340612083Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" returns successfully" Feb 13 15:27:57.341930 containerd[1504]: time="2025-02-13T15:27:57.341906927Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:27:57.342097 containerd[1504]: time="2025-02-13T15:27:57.342065034Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:27:57.342097 containerd[1504]: time="2025-02-13T15:27:57.342082767Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:27:57.342758 containerd[1504]: time="2025-02-13T15:27:57.342702923Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:27:57.342856 containerd[1504]: time="2025-02-13T15:27:57.342819372Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:27:57.342921 containerd[1504]: time="2025-02-13T15:27:57.342855980Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:27:57.343466 containerd[1504]: time="2025-02-13T15:27:57.343446080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:4,}" Feb 13 15:27:57.344312 kubelet[2675]: I0213 15:27:57.343711 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8" Feb 13 15:27:57.344870 containerd[1504]: time="2025-02-13T15:27:57.344846021Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" Feb 13 15:27:57.345932 containerd[1504]: time="2025-02-13T15:27:57.345912045Z" level=info msg="Ensure that sandbox dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8 in task-service has been cleanup successfully" Feb 13 15:27:57.346138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f-shm.mount: Deactivated successfully. Feb 13 15:27:57.346267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8-shm.mount: Deactivated successfully. 
Feb 13 15:27:57.346366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7-shm.mount: Deactivated successfully. Feb 13 15:27:57.346593 systemd[1]: run-netns-cni\x2d5d406594\x2d3643\x2d655d\x2df0c3\x2db1af5acba3d4.mount: Deactivated successfully. Feb 13 15:27:57.346679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d-shm.mount: Deactivated successfully. Feb 13 15:27:57.346991 containerd[1504]: time="2025-02-13T15:27:57.346812987Z" level=info msg="TearDown network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" successfully" Feb 13 15:27:57.346991 containerd[1504]: time="2025-02-13T15:27:57.346835319Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" returns successfully" Feb 13 15:27:57.347832 containerd[1504]: time="2025-02-13T15:27:57.347659509Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:27:57.347832 containerd[1504]: time="2025-02-13T15:27:57.347768964Z" level=info msg="TearDown network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" successfully" Feb 13 15:27:57.347832 containerd[1504]: time="2025-02-13T15:27:57.347783511Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" returns successfully" Feb 13 15:27:57.348686 containerd[1504]: time="2025-02-13T15:27:57.348202499Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:27:57.348686 containerd[1504]: time="2025-02-13T15:27:57.348286356Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:27:57.348686 containerd[1504]: time="2025-02-13T15:27:57.348296275Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:27:57.351026 containerd[1504]: time="2025-02-13T15:27:57.350786034Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:27:57.351026 containerd[1504]: time="2025-02-13T15:27:57.350891473Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:27:57.351026 containerd[1504]: time="2025-02-13T15:27:57.350905028Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:27:57.352450 containerd[1504]: time="2025-02-13T15:27:57.351667792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:27:57.352492 kubelet[2675]: I0213 15:27:57.351852 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f" Feb 13 15:27:57.352479 systemd[1]: run-netns-cni\x2dbc783a96\x2db58a\x2de4a5\x2d1339\x2d19cdedf73feb.mount: Deactivated successfully. 
Feb 13 15:27:57.352622 containerd[1504]: time="2025-02-13T15:27:57.352591618Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" Feb 13 15:27:57.357320 kubelet[2675]: I0213 15:27:57.357295 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b" Feb 13 15:27:57.357922 containerd[1504]: time="2025-02-13T15:27:57.357898252Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" Feb 13 15:27:57.358503 containerd[1504]: time="2025-02-13T15:27:57.358480065Z" level=info msg="Ensure that sandbox 1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b in task-service has been cleanup successfully" Feb 13 15:27:57.358884 containerd[1504]: time="2025-02-13T15:27:57.358864537Z" level=info msg="TearDown network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" successfully" Feb 13 15:27:57.358969 containerd[1504]: time="2025-02-13T15:27:57.358951762Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" returns successfully" Feb 13 15:27:57.359391 containerd[1504]: time="2025-02-13T15:27:57.359370830Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:27:57.359565 containerd[1504]: time="2025-02-13T15:27:57.359547952Z" level=info msg="TearDown network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" successfully" Feb 13 15:27:57.359735 containerd[1504]: time="2025-02-13T15:27:57.359718362Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" returns successfully" Feb 13 15:27:57.361150 systemd[1]: run-netns-cni\x2d726e378f\x2d18f2\x2dcfa3\x2dfc53\x2d679115fe6710.mount: Deactivated successfully. 
Feb 13 15:27:57.361694 kubelet[2675]: I0213 15:27:57.361673 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7" Feb 13 15:27:57.362581 containerd[1504]: time="2025-02-13T15:27:57.362543471Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" Feb 13 15:27:57.362626 containerd[1504]: time="2025-02-13T15:27:57.362587545Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:27:57.362729 containerd[1504]: time="2025-02-13T15:27:57.362705336Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:27:57.362729 containerd[1504]: time="2025-02-13T15:27:57.362720034Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:27:57.362807 containerd[1504]: time="2025-02-13T15:27:57.362739039Z" level=info msg="Ensure that sandbox f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7 in task-service has been cleanup successfully" Feb 13 15:27:57.364939 containerd[1504]: time="2025-02-13T15:27:57.364740951Z" level=info msg="TearDown network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" successfully" Feb 13 15:27:57.364939 containerd[1504]: time="2025-02-13T15:27:57.364761440Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" returns successfully" Feb 13 15:27:57.365388 containerd[1504]: time="2025-02-13T15:27:57.365343214Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:27:57.365460 containerd[1504]: time="2025-02-13T15:27:57.365443772Z" level=info msg="TearDown network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" successfully" Feb 13 15:27:57.365460 containerd[1504]: time="2025-02-13T15:27:57.365455354Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" returns successfully" Feb 13 15:27:57.366063 containerd[1504]: time="2025-02-13T15:27:57.365842723Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:27:57.366063 containerd[1504]: time="2025-02-13T15:27:57.365938031Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:27:57.366063 containerd[1504]: time="2025-02-13T15:27:57.365950475Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:27:57.365875 systemd[1]: run-netns-cni\x2d614fb8e4\x2d945e\x2dc324\x2de3ff\x2d4bfe4524d4f1.mount: Deactivated successfully. 
Feb 13 15:27:57.367050 containerd[1504]: time="2025-02-13T15:27:57.366865845Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:27:57.367050 containerd[1504]: time="2025-02-13T15:27:57.366960484Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:27:57.367050 containerd[1504]: time="2025-02-13T15:27:57.366973659Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:27:57.367573 kubelet[2675]: I0213 15:27:57.367412 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044" Feb 13 15:27:57.370958 containerd[1504]: time="2025-02-13T15:27:57.368033149Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" Feb 13 15:27:57.370958 containerd[1504]: time="2025-02-13T15:27:57.368230671Z" level=info msg="Ensure that sandbox 525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044 in task-service has been cleanup successfully" Feb 13 15:27:57.370958 containerd[1504]: time="2025-02-13T15:27:57.369075087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:4,}" Feb 13 15:27:57.371073 kubelet[2675]: E0213 15:27:57.368737 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:57.371264 containerd[1504]: time="2025-02-13T15:27:57.371213006Z" level=info msg="TearDown network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" successfully" Feb 13 15:27:57.371264 containerd[1504]: time="2025-02-13T15:27:57.371250787Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" returns successfully" Feb 13 15:27:57.371761 containerd[1504]: time="2025-02-13T15:27:57.371733223Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:27:57.372172 containerd[1504]: time="2025-02-13T15:27:57.372082449Z" level=info msg="TearDown network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" successfully" Feb 13 15:27:57.372172 containerd[1504]: time="2025-02-13T15:27:57.372094772Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" returns successfully" Feb 13 15:27:57.371946 systemd[1]: run-netns-cni\x2d22077663\x2da96c\x2d857c\x2d0951\x2d214f3298a792.mount: Deactivated successfully. 
Feb 13 15:27:57.372689 containerd[1504]: time="2025-02-13T15:27:57.372669313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:2,}" Feb 13 15:27:57.676958 containerd[1504]: time="2025-02-13T15:27:57.676900393Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:27:57.677513 containerd[1504]: time="2025-02-13T15:27:57.677030768Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:27:57.677513 containerd[1504]: time="2025-02-13T15:27:57.677041819Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:27:57.677751 containerd[1504]: time="2025-02-13T15:27:57.677678846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:27:57.678619 containerd[1504]: time="2025-02-13T15:27:57.678580921Z" level=info msg="Ensure that sandbox 6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f in task-service has been cleanup successfully" Feb 13 15:27:57.678824 containerd[1504]: time="2025-02-13T15:27:57.678784564Z" level=info msg="TearDown network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" successfully" Feb 13 15:27:57.678824 containerd[1504]: time="2025-02-13T15:27:57.678800484Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" returns successfully" Feb 13 15:27:57.679154 containerd[1504]: time="2025-02-13T15:27:57.679114585Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:27:57.679244 containerd[1504]: time="2025-02-13T15:27:57.679219982Z" level=info msg="TearDown network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" successfully" Feb 13 15:27:57.679244 containerd[1504]: time="2025-02-13T15:27:57.679233568Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" returns successfully" Feb 13 15:27:57.679547 containerd[1504]: time="2025-02-13T15:27:57.679508054Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:27:57.679749 containerd[1504]: time="2025-02-13T15:27:57.679724541Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:27:57.679749 containerd[1504]: time="2025-02-13T15:27:57.679742444Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:27:57.679975 containerd[1504]: time="2025-02-13T15:27:57.679947470Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:27:57.680109 containerd[1504]: time="2025-02-13T15:27:57.680028341Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:27:57.680109 containerd[1504]: time="2025-02-13T15:27:57.680046435Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:27:57.680410 containerd[1504]: 
time="2025-02-13T15:27:57.680390022Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:57.680593 containerd[1504]: time="2025-02-13T15:27:57.680550684Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:57.680593 containerd[1504]: time="2025-02-13T15:27:57.680565602Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:27:57.680726 kubelet[2675]: E0213 15:27:57.680701 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:57.684760 containerd[1504]: time="2025-02-13T15:27:57.681233937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:5,}" Feb 13 15:27:58.345870 systemd[1]: run-netns-cni\x2d9b39983e\x2d73e2\x2dabae\x2d6ad9\x2db954c11e8475.mount: Deactivated successfully. Feb 13 15:27:58.388475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408129685.mount: Deactivated successfully. Feb 13 15:27:58.544153 containerd[1504]: time="2025-02-13T15:27:58.544081108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:58.554770 containerd[1504]: time="2025-02-13T15:27:58.554192951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:27:58.554770 containerd[1504]: time="2025-02-13T15:27:58.554333465Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:58.564741 containerd[1504]: time="2025-02-13T15:27:58.564680869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:58.569761 containerd[1504]: time="2025-02-13T15:27:58.569702045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.284952384s" Feb 13 15:27:58.569966 containerd[1504]: time="2025-02-13T15:27:58.569942727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:27:58.585232 containerd[1504]: time="2025-02-13T15:27:58.585180281Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:27:58.605425 containerd[1504]: time="2025-02-13T15:27:58.605265545Z" level=info msg="CreateContainer within sandbox \"07a58b272d47a4398d6e05e16a0fef89d512db8fdae0878efb99abb822d55128\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a7e88e21b733bdf34968c72acb98f84de54ea99e0ae7c32044ee6f1d8e601967\"" Feb 13 15:27:58.606542 
containerd[1504]: time="2025-02-13T15:27:58.606509282Z" level=info msg="StartContainer for \"a7e88e21b733bdf34968c72acb98f84de54ea99e0ae7c32044ee6f1d8e601967\"" Feb 13 15:27:58.663798 containerd[1504]: time="2025-02-13T15:27:58.663742123Z" level=error msg="Failed to destroy network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.664446 containerd[1504]: time="2025-02-13T15:27:58.664423062Z" level=error msg="encountered an error cleaning up failed sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.664567 containerd[1504]: time="2025-02-13T15:27:58.664548157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.665255 containerd[1504]: time="2025-02-13T15:27:58.664968147Z" level=error msg="Failed to destroy network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.665319 kubelet[2675]: E0213 15:27:58.665153 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.665319 kubelet[2675]: E0213 15:27:58.665209 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:58.665904 kubelet[2675]: E0213 15:27:58.665413 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" Feb 13 15:27:58.667435 kubelet[2675]: E0213 15:27:58.667129 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb78c4b87-jxmw6_calico-system(b8166788-4099-451f-b170-059d6e53e935)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podUID="b8166788-4099-451f-b170-059d6e53e935" Feb 13 15:27:58.670376 containerd[1504]: time="2025-02-13T15:27:58.668257357Z" level=error msg="encountered an error cleaning up failed sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.670376 containerd[1504]: time="2025-02-13T15:27:58.668470849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.671923 kubelet[2675]: E0213 15:27:58.671866 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.672023 kubelet[2675]: E0213 15:27:58.671995 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:58.672069 kubelet[2675]: E0213 15:27:58.672041 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fxmwv" Feb 13 15:27:58.672127 kubelet[2675]: E0213 15:27:58.672099 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fxmwv_kube-system(dbe19860-015e-4dde-822b-2e8f3262322d)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fxmwv" podUID="dbe19860-015e-4dde-822b-2e8f3262322d" Feb 13 15:27:58.680328 containerd[1504]: time="2025-02-13T15:27:58.680262277Z" level=error msg="Failed to destroy network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.680828 containerd[1504]: time="2025-02-13T15:27:58.680674592Z" level=error msg="encountered an error cleaning up failed sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.680828 containerd[1504]: time="2025-02-13T15:27:58.680732100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.681003 kubelet[2675]: E0213 15:27:58.680966 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.681062 kubelet[2675]: E0213 15:27:58.681029 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:58.681062 kubelet[2675]: E0213 15:27:58.681052 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xr95c" Feb 13 15:27:58.681135 kubelet[2675]: E0213 15:27:58.681098 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-xr95c_kube-system(18b79007-649b-4cb6-ba9e-fdcdc956535a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xr95c" podUID="18b79007-649b-4cb6-ba9e-fdcdc956535a" Feb 13 15:27:58.681597 containerd[1504]: time="2025-02-13T15:27:58.681545749Z" level=error msg="Failed to destroy network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.682161 containerd[1504]: time="2025-02-13T15:27:58.682124797Z" level=error msg="encountered an error cleaning up failed sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.682217 containerd[1504]: time="2025-02-13T15:27:58.682196733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.682561 kubelet[2675]: E0213 15:27:58.682533 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.682607 kubelet[2675]: E0213 15:27:58.682581 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:58.682642 kubelet[2675]: E0213 15:27:58.682612 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" Feb 13 15:27:58.682669 kubelet[2675]: E0213 15:27:58.682661 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-kw8hn_calico-apiserver(8ed65fe4-b6cb-4506-afa3-9bfead75ba87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podUID="8ed65fe4-b6cb-4506-afa3-9bfead75ba87" Feb 13 15:27:58.685189 containerd[1504]: time="2025-02-13T15:27:58.685148018Z" level=error msg="Failed to destroy network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.685703 containerd[1504]: time="2025-02-13T15:27:58.685654109Z" level=error msg="encountered an error cleaning up failed sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.685828 containerd[1504]: time="2025-02-13T15:27:58.685742094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.686139 kubelet[2675]: E0213 15:27:58.686085 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.686291 kubelet[2675]: E0213 15:27:58.686168 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:58.686291 kubelet[2675]: E0213 15:27:58.686196 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9pw2x" Feb 13 15:27:58.686291 kubelet[2675]: E0213 15:27:58.686267 
2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9pw2x_calico-system(782fda46-0c98-43d2-919a-69ce574b5e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9pw2x" podUID="782fda46-0c98-43d2-919a-69ce574b5e7e" Feb 13 15:27:58.688214 containerd[1504]: time="2025-02-13T15:27:58.688171018Z" level=error msg="Failed to destroy network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.688705 containerd[1504]: time="2025-02-13T15:27:58.688667512Z" level=error msg="encountered an error cleaning up failed sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.688780 containerd[1504]: time="2025-02-13T15:27:58.688736511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.689043 kubelet[2675]: E0213 15:27:58.689011 2675 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:58.689092 kubelet[2675]: E0213 15:27:58.689077 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:58.689134 kubelet[2675]: E0213 15:27:58.689111 2675 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" Feb 13 15:27:58.689209 kubelet[2675]: E0213 15:27:58.689193 2675 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c78c8d9f-dzkbh_calico-apiserver(baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podUID="baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54" Feb 13 15:27:58.723519 systemd[1]: Started cri-containerd-a7e88e21b733bdf34968c72acb98f84de54ea99e0ae7c32044ee6f1d8e601967.scope - libcontainer container a7e88e21b733bdf34968c72acb98f84de54ea99e0ae7c32044ee6f1d8e601967. Feb 13 15:27:58.814387 containerd[1504]: time="2025-02-13T15:27:58.814321848Z" level=info msg="StartContainer for \"a7e88e21b733bdf34968c72acb98f84de54ea99e0ae7c32044ee6f1d8e601967\" returns successfully" Feb 13 15:27:58.845688 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:27:58.845826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:27:59.264978 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:34252.service - OpenSSH per-connection server daemon (10.0.0.1:34252). Feb 13 15:27:59.311359 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 34252 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:59.313408 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:59.317835 systemd-logind[1483]: New session 9 of user core. Feb 13 15:27:59.324519 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:27:59.349828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb-shm.mount: Deactivated successfully. Feb 13 15:27:59.349955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b-shm.mount: Deactivated successfully. 
Feb 13 15:27:59.375587 kubelet[2675]: E0213 15:27:59.375516 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.379201 kubelet[2675]: I0213 15:27:59.378789 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b" Feb 13 15:27:59.379514 containerd[1504]: time="2025-02-13T15:27:59.379474968Z" level=info msg="StopPodSandbox for \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\"" Feb 13 15:27:59.383972 containerd[1504]: time="2025-02-13T15:27:59.383924539Z" level=info msg="Ensure that sandbox 9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b in task-service has been cleanup successfully" Feb 13 15:27:59.386393 kubelet[2675]: I0213 15:27:59.385598 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d" Feb 13 15:27:59.386468 containerd[1504]: time="2025-02-13T15:27:59.386137046Z" level=info msg="StopPodSandbox for \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\"" Feb 13 15:27:59.386468 containerd[1504]: time="2025-02-13T15:27:59.386367018Z" level=info msg="Ensure that sandbox 95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d in task-service has been cleanup successfully" Feb 13 15:27:59.390754 containerd[1504]: time="2025-02-13T15:27:59.390713516Z" level=info msg="TearDown network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" successfully" Feb 13 15:27:59.390754 containerd[1504]: time="2025-02-13T15:27:59.390740276Z" level=info msg="StopPodSandbox for \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" returns successfully" Feb 13 15:27:59.391252 systemd[1]: run-netns-cni\x2d2b749f14\x2d0bc2\x2dba61\x2d38a3\x2d15d1bdefbe31.mount: Deactivated successfully. Feb 13 15:27:59.391740 systemd[1]: run-netns-cni\x2d3f90d488\x2d4054\x2dbbb7\x2dbc54\x2dcbf7622b1134.mount: Deactivated successfully. 
Feb 13 15:27:59.392530 containerd[1504]: time="2025-02-13T15:27:59.392474474Z" level=info msg="TearDown network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" successfully" Feb 13 15:27:59.392530 containerd[1504]: time="2025-02-13T15:27:59.392512195Z" level=info msg="StopPodSandbox for \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" returns successfully" Feb 13 15:27:59.393448 containerd[1504]: time="2025-02-13T15:27:59.393416844Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" Feb 13 15:27:59.394639 containerd[1504]: time="2025-02-13T15:27:59.393542691Z" level=info msg="TearDown network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" successfully" Feb 13 15:27:59.394639 containerd[1504]: time="2025-02-13T15:27:59.393565684Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" returns successfully" Feb 13 15:27:59.394639 containerd[1504]: time="2025-02-13T15:27:59.394208672Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:27:59.394639 containerd[1504]: time="2025-02-13T15:27:59.394310263Z" level=info msg="TearDown network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" successfully" Feb 13 15:27:59.394639 containerd[1504]: time="2025-02-13T15:27:59.394323849Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" returns successfully" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394643169Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394708481Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394747705Z" level=info msg="TearDown network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" successfully" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394763685Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" returns successfully" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394801336Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:27:59.394866 containerd[1504]: time="2025-02-13T15:27:59.394817276Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397365303Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397478456Z" level=info msg="TearDown network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" successfully" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397491050Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" returns successfully" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397544650Z" level=info msg="StopPodSandbox for 
\"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397642464Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:27:59.397880 containerd[1504]: time="2025-02-13T15:27:59.397655929Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:27:59.400181 containerd[1504]: time="2025-02-13T15:27:59.399404505Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:27:59.400181 containerd[1504]: time="2025-02-13T15:27:59.399512147Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:27:59.400181 containerd[1504]: time="2025-02-13T15:27:59.399526193Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:27:59.400799 containerd[1504]: time="2025-02-13T15:27:59.400756866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:27:59.404307 containerd[1504]: time="2025-02-13T15:27:59.404260679Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:27:59.404930 kubelet[2675]: I0213 15:27:59.404680 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919" Feb 13 15:27:59.407110 containerd[1504]: time="2025-02-13T15:27:59.406175567Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:27:59.407180 containerd[1504]: time="2025-02-13T15:27:59.407115814Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:27:59.407393 containerd[1504]: time="2025-02-13T15:27:59.407342279Z" level=info msg="StopPodSandbox for \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\"" Feb 13 15:27:59.408372 containerd[1504]: time="2025-02-13T15:27:59.407664605Z" level=info msg="Ensure that sandbox 33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919 in task-service has been cleanup successfully" Feb 13 15:27:59.408372 containerd[1504]: time="2025-02-13T15:27:59.408039259Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:27:59.408372 containerd[1504]: time="2025-02-13T15:27:59.408142894Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:27:59.408372 containerd[1504]: time="2025-02-13T15:27:59.408158513Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:27:59.408532 kubelet[2675]: E0213 15:27:59.408377 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.408653 containerd[1504]: time="2025-02-13T15:27:59.408628557Z" level=info msg="TearDown network for sandbox 
\"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" successfully" Feb 13 15:27:59.408799 containerd[1504]: time="2025-02-13T15:27:59.408736429Z" level=info msg="StopPodSandbox for \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" returns successfully" Feb 13 15:27:59.409253 containerd[1504]: time="2025-02-13T15:27:59.408919002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:6,}" Feb 13 15:27:59.410051 containerd[1504]: time="2025-02-13T15:27:59.410006977Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" Feb 13 15:27:59.410170 containerd[1504]: time="2025-02-13T15:27:59.410143172Z" level=info msg="TearDown network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" successfully" Feb 13 15:27:59.410170 containerd[1504]: time="2025-02-13T15:27:59.410162328Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" returns successfully" Feb 13 15:27:59.410606 systemd[1]: run-netns-cni\x2d4398ec72\x2dd4d5\x2d487a\x2d9571\x2de1c42dd6b885.mount: Deactivated successfully. Feb 13 15:27:59.413526 containerd[1504]: time="2025-02-13T15:27:59.412316295Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:27:59.413768 containerd[1504]: time="2025-02-13T15:27:59.413683394Z" level=info msg="TearDown network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" successfully" Feb 13 15:27:59.413768 containerd[1504]: time="2025-02-13T15:27:59.413706067Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" returns successfully" Feb 13 15:27:59.414243 containerd[1504]: time="2025-02-13T15:27:59.414212468Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:27:59.414355 containerd[1504]: time="2025-02-13T15:27:59.414305994Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:27:59.414355 containerd[1504]: time="2025-02-13T15:27:59.414320060Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:27:59.414636 kubelet[2675]: I0213 15:27:59.414609 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c" Feb 13 15:27:59.415246 containerd[1504]: time="2025-02-13T15:27:59.415214340Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:27:59.415335 containerd[1504]: time="2025-02-13T15:27:59.415312986Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:27:59.415335 containerd[1504]: time="2025-02-13T15:27:59.415330239Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:27:59.415519 containerd[1504]: time="2025-02-13T15:27:59.415493324Z" level=info msg="StopPodSandbox for \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\"" Feb 13 15:27:59.415730 containerd[1504]: time="2025-02-13T15:27:59.415704171Z" level=info msg="Ensure that 
sandbox 769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c in task-service has been cleanup successfully" Feb 13 15:27:59.416076 containerd[1504]: time="2025-02-13T15:27:59.416052896Z" level=info msg="TearDown network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" successfully" Feb 13 15:27:59.416242 containerd[1504]: time="2025-02-13T15:27:59.416194202Z" level=info msg="StopPodSandbox for \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" returns successfully" Feb 13 15:27:59.416650 containerd[1504]: time="2025-02-13T15:27:59.416622897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:27:59.418413 containerd[1504]: time="2025-02-13T15:27:59.418242420Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" Feb 13 15:27:59.418413 containerd[1504]: time="2025-02-13T15:27:59.418321578Z" level=info msg="TearDown network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" successfully" Feb 13 15:27:59.418413 containerd[1504]: time="2025-02-13T15:27:59.418331097Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" returns successfully" Feb 13 15:27:59.421910 systemd[1]: run-netns-cni\x2d8b6a7ade\x2de843\x2df149\x2d960b\x2d260952a3575f.mount: Deactivated successfully. Feb 13 15:27:59.424450 containerd[1504]: time="2025-02-13T15:27:59.424242023Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:27:59.424788 containerd[1504]: time="2025-02-13T15:27:59.424636835Z" level=info msg="TearDown network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" successfully" Feb 13 15:27:59.424913 containerd[1504]: time="2025-02-13T15:27:59.424655169Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" returns successfully" Feb 13 15:27:59.425484 containerd[1504]: time="2025-02-13T15:27:59.425429013Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:27:59.428381 containerd[1504]: time="2025-02-13T15:27:59.425637515Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:27:59.428381 containerd[1504]: time="2025-02-13T15:27:59.425665488Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:27:59.429127 kubelet[2675]: I0213 15:27:59.428619 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c" Feb 13 15:27:59.430073 containerd[1504]: time="2025-02-13T15:27:59.430035077Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:27:59.430492 containerd[1504]: time="2025-02-13T15:27:59.430383392Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:27:59.430560 containerd[1504]: time="2025-02-13T15:27:59.430534155Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:27:59.430590 
containerd[1504]: time="2025-02-13T15:27:59.430573870Z" level=info msg="StopPodSandbox for \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\"" Feb 13 15:27:59.431637 kubelet[2675]: E0213 15:27:59.431604 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.431701 containerd[1504]: time="2025-02-13T15:27:59.431681010Z" level=info msg="Ensure that sandbox e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c in task-service has been cleanup successfully" Feb 13 15:27:59.432718 containerd[1504]: time="2025-02-13T15:27:59.432499839Z" level=info msg="TearDown network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" successfully" Feb 13 15:27:59.432718 containerd[1504]: time="2025-02-13T15:27:59.432518604Z" level=info msg="StopPodSandbox for \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" returns successfully" Feb 13 15:27:59.432718 containerd[1504]: time="2025-02-13T15:27:59.432526669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:5,}" Feb 13 15:27:59.433192 containerd[1504]: time="2025-02-13T15:27:59.433149931Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" Feb 13 15:27:59.433806 containerd[1504]: time="2025-02-13T15:27:59.433758093Z" level=info msg="TearDown network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" successfully" Feb 13 15:27:59.433971 containerd[1504]: time="2025-02-13T15:27:59.433859183Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" returns successfully" Feb 13 15:27:59.443175 kubelet[2675]: I0213 15:27:59.443129 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.442397717Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.444287818Z" level=info msg="StopPodSandbox for \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\"" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.446290661Z" level=info msg="Ensure that sandbox dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb in task-service has been cleanup successfully" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.446586737Z" level=info msg="TearDown network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" successfully" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.446630630Z" level=info msg="StopPodSandbox for \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" returns successfully" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.446603138Z" level=info msg="TearDown network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" successfully" Feb 13 15:27:59.446992 containerd[1504]: time="2025-02-13T15:27:59.446718755Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" returns successfully" Feb 13 15:27:59.447675 containerd[1504]: 
time="2025-02-13T15:27:59.447644555Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" Feb 13 15:27:59.447796 containerd[1504]: time="2025-02-13T15:27:59.447743791Z" level=info msg="TearDown network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" successfully" Feb 13 15:27:59.447796 containerd[1504]: time="2025-02-13T15:27:59.447791120Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" returns successfully" Feb 13 15:27:59.448652 containerd[1504]: time="2025-02-13T15:27:59.448620859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:3,}" Feb 13 15:27:59.449811 containerd[1504]: time="2025-02-13T15:27:59.448856392Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:27:59.449811 containerd[1504]: time="2025-02-13T15:27:59.448941511Z" level=info msg="TearDown network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" successfully" Feb 13 15:27:59.449811 containerd[1504]: time="2025-02-13T15:27:59.448951019Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" returns successfully" Feb 13 15:27:59.449811 containerd[1504]: time="2025-02-13T15:27:59.449759228Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:27:59.449925 containerd[1504]: time="2025-02-13T15:27:59.449907046Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:27:59.449925 containerd[1504]: time="2025-02-13T15:27:59.449919239Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:27:59.450794 containerd[1504]: time="2025-02-13T15:27:59.450706168Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:27:59.451015 containerd[1504]: time="2025-02-13T15:27:59.450947351Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:27:59.451660 containerd[1504]: time="2025-02-13T15:27:59.451641685Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:27:59.452555 containerd[1504]: time="2025-02-13T15:27:59.452453120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:5,}" Feb 13 15:27:59.508797 sshd[4595]: Connection closed by 10.0.0.1 port 34252 Feb 13 15:27:59.508591 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:59.517467 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:34252.service: Deactivated successfully. Feb 13 15:27:59.522475 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:27:59.529960 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:27:59.531568 systemd-logind[1483]: Removed session 9. 
Feb 13 15:27:59.718068 systemd-networkd[1418]: cali95bbe0b80f1: Link UP Feb 13 15:27:59.719427 systemd-networkd[1418]: cali95bbe0b80f1: Gained carrier Feb 13 15:27:59.731734 kubelet[2675]: I0213 15:27:59.731683 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-26nw9" podStartSLOduration=2.893244398 podStartE2EDuration="19.73164326s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:27:41.732002748 +0000 UTC m=+18.622317706" lastFinishedPulling="2025-02-13 15:27:58.57040161 +0000 UTC m=+35.460716568" observedRunningTime="2025-02-13 15:27:59.401259841 +0000 UTC m=+36.291574829" watchObservedRunningTime="2025-02-13 15:27:59.73164326 +0000 UTC m=+36.621958218" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.592 [INFO][4675] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.604 [INFO][4675] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--fxmwv-eth0 coredns-76f75df574- kube-system dbe19860-015e-4dde-822b-2e8f3262322d 741 0 2025-02-13 15:27:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-fxmwv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95bbe0b80f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.605 [INFO][4675] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.657 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" HandleID="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Workload="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.675 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" HandleID="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Workload="localhost-k8s-coredns--76f75df574--fxmwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-fxmwv", "timestamp":"2025-02-13 15:27:59.657422645 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.675 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
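[editor's note] The pod_startup_latency_tracker record above encodes arithmetic worth making explicit: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A short sketch reproduces both figures exactly from the four timestamps in the record (the layout string is an assumption matching the Go time format the kubelet prints):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-02-13 15:27:40 +0000 UTC")
        firstPull := parse("2025-02-13 15:27:41.732002748 +0000 UTC")
        lastPull := parse("2025-02-13 15:27:58.57040161 +0000 UTC")
        running := parse("2025-02-13 15:27:59.73164326 +0000 UTC")

        e2e := running.Sub(created)          // end-to-end: creation to observed running
        slo := e2e - lastPull.Sub(firstPull) // SLO figure excludes the image-pull window
        fmt.Println("podStartE2EDuration:", e2e) // 19.73164326s, as logged
        fmt.Println("podStartSLOduration:", slo) // 2.893244398s, as logged
    }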
Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.676 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.676 [INFO][4741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.684 [INFO][4741] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.689 [INFO][4741] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.691 [INFO][4741] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.693 [INFO][4741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.693 [INFO][4741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.695 [INFO][4741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9 Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.698 [INFO][4741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4741] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" host="localhost" Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
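[editor's note] The [4741] records above walk the Calico IPAM happy path in order: confirm the host's affinity to block 192.168.88.128/26, load the block, claim the lowest free address, write the block back to claim it, then release the host-wide lock. Below is an illustrative sketch of only the block-assignment step — a toy in-memory bitmap, not Calico's datastore-backed implementation in ipam/ipam.go:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models one /26 allocation block with a per-ordinal in-use flag.
    type block struct {
        cidr  netip.Prefix
        inUse [64]bool // a /26 spans 64 addresses (ordinals 0..63)
    }

    // assign claims the lowest free ordinal, mirroring
    // "Attempting to assign 1 addresses from block".
    func (b *block) assign() (netip.Addr, bool) {
        for ord := range b.inUse {
            if b.inUse[ord] {
                continue
            }
            b.inUse[ord] = true // persisted by "Writing block in order to claim IPs"
            addr := b.cidr.Addr()
            for i := 0; i < ord; i++ {
                addr = addr.Next()
            }
            return addr, true
        }
        return netip.Addr{}, false // block exhausted; real IPAM would try another block
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
        b.inUse[0] = true // ordinal 0 (.128) held back, matching the log, which starts at .129
        for i := 0; i < 5; i++ {
            ip, _ := b.assign()
            fmt.Println(ip) // 192.168.88.129 through .133, the five assignments in this journal
        }
    }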
Feb 13 15:27:59.734287 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" HandleID="k8s-pod-network.5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Workload="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.708 [INFO][4675] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fxmwv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbe19860-015e-4dde-822b-2e8f3262322d", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-fxmwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95bbe0b80f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.709 [INFO][4675] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.709 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95bbe0b80f1 ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.719 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.720 
[INFO][4675] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fxmwv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"dbe19860-015e-4dde-822b-2e8f3262322d", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9", Pod:"coredns-76f75df574-fxmwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95bbe0b80f1", MAC:"aa:1c:bc:cf:76:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.735858 containerd[1504]: 2025-02-13 15:27:59.730 [INFO][4675] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9" Namespace="kube-system" Pod="coredns-76f75df574-fxmwv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fxmwv-eth0" Feb 13 15:27:59.741809 systemd-networkd[1418]: cali85e7a5b10fd: Link UP Feb 13 15:27:59.742002 systemd-networkd[1418]: cali85e7a5b10fd: Gained carrier Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.572 [INFO][4658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.592 [INFO][4658] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0 calico-apiserver-75c78c8d9f- calico-apiserver 8ed65fe4-b6cb-4506-afa3-9bfead75ba87 744 0 2025-02-13 15:27:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75c78c8d9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75c78c8d9f-kw8hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85e7a5b10fd [] []}} 
ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.592 [INFO][4658] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.657 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" HandleID="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.676 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" HandleID="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003059a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75c78c8d9f-kw8hn", "timestamp":"2025-02-13 15:27:59.657189306 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.676 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.705 [INFO][4734] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.707 [INFO][4734] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.710 [INFO][4734] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.714 [INFO][4734] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.715 [INFO][4734] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.717 [INFO][4734] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.718 [INFO][4734] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.720 [INFO][4734] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88 Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.723 [INFO][4734] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.729 [INFO][4734] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.730 [INFO][4734] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" host="localhost" Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.730 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
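[editor's note] Interleaving the [4741] and [4734] records shows the host-wide lock doing its job: the calico-apiserver request logged "About to acquire host-wide IPAM lock." at 15:27:59.676 but only logged "Acquired" at .705, immediately after the coredns request released it. A minimal sketch of that serialization with an in-process mutex (the real lock is host-wide, shared across concurrent CNI invocations; goroutine order here is nondeterministic, which is the point):

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        ipamMu sync.Mutex
        next   = 129 // next free final octet in block 192.168.88.128/26, per the log
    )

    func allocate(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        ipamMu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
        defer ipamMu.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("%s -> 192.168.88.%d/26\n", pod, next)
        next++
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-76f75df574-fxmwv", "calico-apiserver-75c78c8d9f-kw8hn"} {
            wg.Add(1)
            go allocate(pod, &wg)
        }
        wg.Wait() // whichever goroutine wins the lock, addresses are handed out one at a time
    }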
Feb 13 15:27:59.757323 containerd[1504]: 2025-02-13 15:27:59.730 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" HandleID="k8s-pod-network.b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.737 [INFO][4658] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0", GenerateName:"calico-apiserver-75c78c8d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed65fe4-b6cb-4506-afa3-9bfead75ba87", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c78c8d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75c78c8d9f-kw8hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85e7a5b10fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.737 [INFO][4658] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.737 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85e7a5b10fd ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.740 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.740 [INFO][4658] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0", GenerateName:"calico-apiserver-75c78c8d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed65fe4-b6cb-4506-afa3-9bfead75ba87", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c78c8d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88", Pod:"calico-apiserver-75c78c8d9f-kw8hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85e7a5b10fd", MAC:"e6:9d:f1:71:b0:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.758182 containerd[1504]: 2025-02-13 15:27:59.754 [INFO][4658] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-kw8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--kw8hn-eth0" Feb 13 15:27:59.798565 systemd-networkd[1418]: cali765bd28a5af: Link UP Feb 13 15:27:59.798878 systemd-networkd[1418]: cali765bd28a5af: Gained carrier Feb 13 15:27:59.826010 containerd[1504]: time="2025-02-13T15:27:59.825620167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:59.826010 containerd[1504]: time="2025-02-13T15:27:59.825672936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:59.826010 containerd[1504]: time="2025-02-13T15:27:59.825685239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.509 [INFO][4620] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.547 [INFO][4620] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0 calico-apiserver-75c78c8d9f- calico-apiserver baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54 749 0 2025-02-13 15:27:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75c78c8d9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75c78c8d9f-dzkbh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali765bd28a5af [] []}} ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.547 [INFO][4620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.666 [INFO][4714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" HandleID="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" HandleID="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75c78c8d9f-dzkbh", "timestamp":"2025-02-13 15:27:59.666906935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.730 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.731 [INFO][4714] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.734 [INFO][4714] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.739 [INFO][4714] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.746 [INFO][4714] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.748 [INFO][4714] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.750 [INFO][4714] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.750 [INFO][4714] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.753 [INFO][4714] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.764 [INFO][4714] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.774 [INFO][4714] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.774 [INFO][4714] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" host="localhost" Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.774 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
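[editor's note] With three assignments claimed so far (.129, .130, .131), a throwaway parser is handy for pulling the workload-to-IP mapping out of the "Calico CNI IPAM assigned addresses" records; the regular expression below is fitted to these lines only and is an assumption, not a stable format guarantee:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`IPAM assigned addresses IPv4=\[([^\]]+)\].*Workload="([^"]+)"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // records in this journal run very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("%s\t%s\n", m[2], m[1]) // workload endpoint, assigned CIDR
            }
        }
    }

Feeding this journal through it (for example, journalctl -u containerd piped into the program) yields one workload endpoint and CIDR per line.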
Feb 13 15:27:59.826010 containerd[1504]: 2025-02-13 15:27:59.774 [INFO][4714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" HandleID="k8s-pod-network.6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Workload="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.786 [INFO][4620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0", GenerateName:"calico-apiserver-75c78c8d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c78c8d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75c78c8d9f-dzkbh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali765bd28a5af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.786 [INFO][4620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.786 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali765bd28a5af ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.799 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.800 [INFO][4620] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0", GenerateName:"calico-apiserver-75c78c8d9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c78c8d9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f", Pod:"calico-apiserver-75c78c8d9f-dzkbh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali765bd28a5af", MAC:"a2:c3:dc:38:8c:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.826855 containerd[1504]: 2025-02-13 15:27:59.822 [INFO][4620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f" Namespace="calico-apiserver" Pod="calico-apiserver-75c78c8d9f-dzkbh" WorkloadEndpoint="localhost-k8s-calico--apiserver--75c78c8d9f--dzkbh-eth0" Feb 13 15:27:59.826855 containerd[1504]: time="2025-02-13T15:27:59.825761843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.844977 containerd[1504]: time="2025-02-13T15:27:59.842641469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:59.845466 containerd[1504]: time="2025-02-13T15:27:59.845178646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:59.845466 containerd[1504]: time="2025-02-13T15:27:59.845195668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.845735 containerd[1504]: time="2025-02-13T15:27:59.845636716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.848526 systemd[1]: Started cri-containerd-5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9.scope - libcontainer container 5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9. Feb 13 15:27:59.859377 containerd[1504]: time="2025-02-13T15:27:59.857520265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:59.859377 containerd[1504]: time="2025-02-13T15:27:59.857769543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:59.859377 containerd[1504]: time="2025-02-13T15:27:59.857785403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.859377 containerd[1504]: time="2025-02-13T15:27:59.857887094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.864105 systemd-networkd[1418]: calib4fcd1c6aca: Link UP Feb 13 15:27:59.864757 systemd-networkd[1418]: calib4fcd1c6aca: Gained carrier Feb 13 15:27:59.876542 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:27:59.883650 systemd[1]: Started cri-containerd-b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88.scope - libcontainer container b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88. Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.516 [INFO][4643] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.548 [INFO][4643] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--xr95c-eth0 coredns-76f75df574- kube-system 18b79007-649b-4cb6-ba9e-fdcdc956535a 748 0 2025-02-13 15:27:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-xr95c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4fcd1c6aca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.548 [INFO][4643] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.666 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" HandleID="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Workload="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" HandleID="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Workload="localhost-k8s-coredns--76f75df574--xr95c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-xr95c", "timestamp":"2025-02-13 15:27:59.666279545 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.776 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.776 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.786 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.802 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.816 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.827 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.832 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.833 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.835 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99 Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.840 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.848 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.849 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" host="localhost" Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.849 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:59.885760 containerd[1504]: 2025-02-13 15:27:59.849 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" HandleID="k8s-pod-network.432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Workload="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.855 [INFO][4643] cni-plugin/k8s.go 386: Populated endpoint ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xr95c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18b79007-649b-4cb6-ba9e-fdcdc956535a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-xr95c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4fcd1c6aca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.856 [INFO][4643] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.856 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4fcd1c6aca ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.868 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.868 
[INFO][4643] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xr95c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18b79007-649b-4cb6-ba9e-fdcdc956535a", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99", Pod:"coredns-76f75df574-xr95c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4fcd1c6aca", MAC:"36:01:ac:1c:13:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.886389 containerd[1504]: 2025-02-13 15:27:59.880 [INFO][4643] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99" Namespace="kube-system" Pod="coredns-76f75df574-xr95c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xr95c-eth0" Feb 13 15:27:59.911655 systemd-networkd[1418]: calia25a22b83a4: Link UP Feb 13 15:27:59.912557 systemd[1]: Started cri-containerd-6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f.scope - libcontainer container 6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f. 
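[editor's note] The WorkloadEndpoint dumps above print port numbers in hex (Port:0x35, Port:0x23c1); a two-line decode confirms they match the plain-text port list earlier in the same records (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153):

    package main

    import "fmt"

    func main() {
        // Hex values copied from the endpoint dumps above.
        for name, p := range map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1} {
            fmt.Printf("%s -> %d\n", name, p) // dns/dns-tcp -> 53, metrics -> 9153
        }
    }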
Feb 13 15:27:59.917707 containerd[1504]: time="2025-02-13T15:27:59.917656808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fxmwv,Uid:dbe19860-015e-4dde-822b-2e8f3262322d,Namespace:kube-system,Attempt:5,} returns sandbox id \"5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9\"" Feb 13 15:27:59.918303 systemd-networkd[1418]: calia25a22b83a4: Gained carrier Feb 13 15:27:59.918683 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:27:59.920363 kubelet[2675]: E0213 15:27:59.920295 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.926682 containerd[1504]: time="2025-02-13T15:27:59.926034268Z" level=info msg="CreateContainer within sandbox \"5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:27:59.933155 containerd[1504]: time="2025-02-13T15:27:59.931975892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:59.933155 containerd[1504]: time="2025-02-13T15:27:59.932057225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:59.933155 containerd[1504]: time="2025-02-13T15:27:59.932072223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.933155 containerd[1504]: time="2025-02-13T15:27:59.932214551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:59.946600 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.584 [INFO][4697] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.599 [INFO][4697] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9pw2x-eth0 csi-node-driver- calico-system 782fda46-0c98-43d2-919a-69ce574b5e7e 638 0 2025-02-13 15:27:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9pw2x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia25a22b83a4 [] []}} ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.599 [INFO][4697] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.663 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" HandleID="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Workload="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" HandleID="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Workload="localhost-k8s-csi--node--driver--9pw2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9pw2x", "timestamp":"2025-02-13 15:27:59.663321498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.678 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.849 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.849 [INFO][4740] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.852 [INFO][4740] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.863 [INFO][4740] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.871 [INFO][4740] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.874 [INFO][4740] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.881 [INFO][4740] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.882 [INFO][4740] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.884 [INFO][4740] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.889 [INFO][4740] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.897 [INFO][4740] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.897 [INFO][4740] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" host="localhost" Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.897 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:59.960083 containerd[1504]: 2025-02-13 15:27:59.897 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" HandleID="k8s-pod-network.6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Workload="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.904 [INFO][4697] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9pw2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"782fda46-0c98-43d2-919a-69ce574b5e7e", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9pw2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia25a22b83a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.905 [INFO][4697] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.905 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia25a22b83a4 ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.920 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.922 [INFO][4697] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9pw2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"782fda46-0c98-43d2-919a-69ce574b5e7e", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a", Pod:"csi-node-driver-9pw2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia25a22b83a4", MAC:"62:9a:74:d3:01:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:59.960689 containerd[1504]: 2025-02-13 15:27:59.954 [INFO][4697] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a" Namespace="calico-system" Pod="csi-node-driver-9pw2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--9pw2x-eth0" Feb 13 15:27:59.962584 systemd[1]: Started cri-containerd-432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99.scope - libcontainer container 432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99. 
Feb 13 15:27:59.967195 containerd[1504]: time="2025-02-13T15:27:59.967069872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-kw8hn,Uid:8ed65fe4-b6cb-4506-afa3-9bfead75ba87,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88\"" Feb 13 15:27:59.968961 containerd[1504]: time="2025-02-13T15:27:59.968940095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:27:59.979634 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:27:59.984465 containerd[1504]: time="2025-02-13T15:27:59.984225085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c78c8d9f-dzkbh,Uid:baf6ef68-8ec3-4ba8-a759-b2c5d4c63b54,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f\"" Feb 13 15:28:00.006173 containerd[1504]: time="2025-02-13T15:28:00.006118150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xr95c,Uid:18b79007-649b-4cb6-ba9e-fdcdc956535a,Namespace:kube-system,Attempt:6,} returns sandbox id \"432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99\"" Feb 13 15:28:00.006878 kubelet[2675]: E0213 15:28:00.006844 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:00.008662 containerd[1504]: time="2025-02-13T15:28:00.008632433Z" level=info msg="CreateContainer within sandbox \"432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:00.074829 systemd-networkd[1418]: cali764317a0c0a: Link UP Feb 13 15:28:00.076985 containerd[1504]: time="2025-02-13T15:28:00.074835268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:00.076985 containerd[1504]: time="2025-02-13T15:28:00.074909960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:00.076985 containerd[1504]: time="2025-02-13T15:28:00.074925278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:00.076985 containerd[1504]: time="2025-02-13T15:28:00.075503735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:00.075594 systemd-networkd[1418]: cali764317a0c0a: Gained carrier Feb 13 15:28:00.080664 containerd[1504]: time="2025-02-13T15:28:00.080603426Z" level=info msg="CreateContainer within sandbox \"5d9bf8d1810f2dc2e8f4a3168d2b232aaf0a1fc34acbe7e959c5f29fd2bb82c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8409c86cda3ce50b5d6b88194b792a656bd4ae0a11d4043df7c00491b87f2c84\"" Feb 13 15:28:00.085679 containerd[1504]: time="2025-02-13T15:28:00.085570638Z" level=info msg="StartContainer for \"8409c86cda3ce50b5d6b88194b792a656bd4ae0a11d4043df7c00491b87f2c84\"" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.601 [INFO][4689] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.622 [INFO][4689] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0 calico-kube-controllers-5cb78c4b87- calico-system b8166788-4099-451f-b170-059d6e53e935 747 0 2025-02-13 15:27:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cb78c4b87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cb78c4b87-jxmw6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali764317a0c0a [] []}} ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.622 [INFO][4689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.685 [INFO][4754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" HandleID="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Workload="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.694 [INFO][4754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" HandleID="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Workload="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cb78c4b87-jxmw6", "timestamp":"2025-02-13 15:27:59.68567555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.694 [INFO][4754] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.897 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.898 [INFO][4754] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.902 [INFO][4754] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.909 [INFO][4754] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.917 [INFO][4754] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.919 [INFO][4754] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.924 [INFO][4754] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.924 [INFO][4754] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.927 [INFO][4754] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667 Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:27:59.949 [INFO][4754] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:28:00.066 [INFO][4754] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:28:00.066 [INFO][4754] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" host="localhost" Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:28:00.066 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:00.094594 containerd[1504]: 2025-02-13 15:28:00.067 [INFO][4754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" HandleID="k8s-pod-network.1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Workload="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.071 [INFO][4689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0", GenerateName:"calico-kube-controllers-5cb78c4b87-", Namespace:"calico-system", SelfLink:"", UID:"b8166788-4099-451f-b170-059d6e53e935", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb78c4b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cb78c4b87-jxmw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali764317a0c0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.071 [INFO][4689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.071 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali764317a0c0a ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.076 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.076 [INFO][4689] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0", GenerateName:"calico-kube-controllers-5cb78c4b87-", Namespace:"calico-system", SelfLink:"", UID:"b8166788-4099-451f-b170-059d6e53e935", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb78c4b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667", Pod:"calico-kube-controllers-5cb78c4b87-jxmw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali764317a0c0a", MAC:"16:1e:c0:41:4f:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:00.095190 containerd[1504]: 2025-02-13 15:28:00.089 [INFO][4689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667" Namespace="calico-system" Pod="calico-kube-controllers-5cb78c4b87-jxmw6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cb78c4b87--jxmw6-eth0" Feb 13 15:28:00.100538 systemd[1]: Started cri-containerd-6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a.scope - libcontainer container 6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a. Feb 13 15:28:00.105576 containerd[1504]: time="2025-02-13T15:28:00.105411994Z" level=info msg="CreateContainer within sandbox \"432928562f148478820fa813b543b2a655b3ffa6ec5b95fc3907898ac58e4c99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cf2d1a632bac013a2c2f931ed8719b00cea9642eb7449ef0cb53d03df6fd3f2\"" Feb 13 15:28:00.106417 containerd[1504]: time="2025-02-13T15:28:00.106391254Z" level=info msg="StartContainer for \"4cf2d1a632bac013a2c2f931ed8719b00cea9642eb7449ef0cb53d03df6fd3f2\"" Feb 13 15:28:00.118509 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:00.125663 systemd[1]: Started cri-containerd-8409c86cda3ce50b5d6b88194b792a656bd4ae0a11d4043df7c00491b87f2c84.scope - libcontainer container 8409c86cda3ce50b5d6b88194b792a656bd4ae0a11d4043df7c00491b87f2c84. Feb 13 15:28:00.133015 containerd[1504]: time="2025-02-13T15:28:00.132656529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:00.133015 containerd[1504]: time="2025-02-13T15:28:00.132736198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:00.133015 containerd[1504]: time="2025-02-13T15:28:00.132752799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:00.133015 containerd[1504]: time="2025-02-13T15:28:00.132849150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:00.143172 containerd[1504]: time="2025-02-13T15:28:00.143127360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9pw2x,Uid:782fda46-0c98-43d2-919a-69ce574b5e7e,Namespace:calico-system,Attempt:3,} returns sandbox id \"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a\"" Feb 13 15:28:00.148591 systemd[1]: Started cri-containerd-4cf2d1a632bac013a2c2f931ed8719b00cea9642eb7449ef0cb53d03df6fd3f2.scope - libcontainer container 4cf2d1a632bac013a2c2f931ed8719b00cea9642eb7449ef0cb53d03df6fd3f2. Feb 13 15:28:00.160621 systemd[1]: Started cri-containerd-1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667.scope - libcontainer container 1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667. Feb 13 15:28:00.179782 containerd[1504]: time="2025-02-13T15:28:00.179699529Z" level=info msg="StartContainer for \"8409c86cda3ce50b5d6b88194b792a656bd4ae0a11d4043df7c00491b87f2c84\" returns successfully" Feb 13 15:28:00.181593 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:00.188048 containerd[1504]: time="2025-02-13T15:28:00.188005854Z" level=info msg="StartContainer for \"4cf2d1a632bac013a2c2f931ed8719b00cea9642eb7449ef0cb53d03df6fd3f2\" returns successfully" Feb 13 15:28:00.215339 containerd[1504]: time="2025-02-13T15:28:00.215227215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb78c4b87-jxmw6,Uid:b8166788-4099-451f-b170-059d6e53e935,Namespace:calico-system,Attempt:5,} returns sandbox id \"1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667\"" Feb 13 15:28:00.374831 systemd[1]: run-netns-cni\x2db540242c\x2d2531\x2de52f\x2d5743\x2d34aace0f3f6b.mount: Deactivated successfully. Feb 13 15:28:00.374954 systemd[1]: run-netns-cni\x2d58e95400\x2df31a\x2dcbc6\x2d1f35\x2d1accad758158.mount: Deactivated successfully. 
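
On the recurring "Nameserver limits exceeded" warnings: the traditional resolver honors at most three nameservers, and kubelet enforces the same cap when assembling a pod's resolv.conf, which is why the applied line in these messages is exactly "1.1.1.1 1.0.0.1 8.8.8.8" with the rest omitted. A hedged sketch of that truncation rule; the constant and function names are illustrative, not kubelet's:

package main

import "fmt"

// maxNameservers mirrors the traditional three-nameserver resolver
// limit that kubelet applies; the identifier is illustrative.
const maxNameservers = 3

// applyNameserverLimit keeps the first three nameservers and reports
// whether any were dropped, as the kubelet warning above describes.
func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// A hypothetical host resolv.conf with one nameserver too many.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, omitted := applyNameserverLimit(host)
	if omitted {
		fmt.Printf("some nameservers have been omitted, the applied nameserver line is: %v\n", applied)
	}
}
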
Feb 13 15:28:00.458897 kubelet[2675]: E0213 15:28:00.458843 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:00.487867 kubelet[2675]: E0213 15:28:00.487820 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:00.491667 kubelet[2675]: E0213 15:28:00.491634 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:00.509860 kubelet[2675]: I0213 15:28:00.509816 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fxmwv" podStartSLOduration=26.509769334 podStartE2EDuration="26.509769334s" podCreationTimestamp="2025-02-13 15:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:00.47888919 +0000 UTC m=+37.369204168" watchObservedRunningTime="2025-02-13 15:28:00.509769334 +0000 UTC m=+37.400084292" Feb 13 15:28:00.523635 kubelet[2675]: I0213 15:28:00.523579 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xr95c" podStartSLOduration=26.523532322 podStartE2EDuration="26.523532322s" podCreationTimestamp="2025-02-13 15:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:00.510660228 +0000 UTC m=+37.400975186" watchObservedRunningTime="2025-02-13 15:28:00.523532322 +0000 UTC m=+37.413847280" Feb 13 15:28:00.623422 kernel: bpftool[5320]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:28:00.865986 systemd-networkd[1418]: vxlan.calico: Link UP Feb 13 15:28:00.865996 systemd-networkd[1418]: vxlan.calico: Gained carrier Feb 13 15:28:01.266595 systemd-networkd[1418]: calib4fcd1c6aca: Gained IPv6LL Feb 13 15:28:01.458989 systemd-networkd[1418]: cali764317a0c0a: Gained IPv6LL Feb 13 15:28:01.497829 kubelet[2675]: E0213 15:28:01.497707 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:01.497829 kubelet[2675]: E0213 15:28:01.497707 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:01.650516 systemd-networkd[1418]: cali765bd28a5af: Gained IPv6LL Feb 13 15:28:01.650819 systemd-networkd[1418]: cali95bbe0b80f1: Gained IPv6LL Feb 13 15:28:01.651011 systemd-networkd[1418]: cali85e7a5b10fd: Gained IPv6LL Feb 13 15:28:01.971500 systemd-networkd[1418]: calia25a22b83a4: Gained IPv6LL Feb 13 15:28:02.162517 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL Feb 13 15:28:02.499599 kubelet[2675]: E0213 15:28:02.499169 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:02.499599 kubelet[2675]: E0213 15:28:02.499273 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:03.446368 containerd[1504]: time="2025-02-13T15:28:03.446302706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.481259 containerd[1504]: time="2025-02-13T15:28:03.481172886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 15:28:03.501504 containerd[1504]: time="2025-02-13T15:28:03.501436202Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.501666 kubelet[2675]: E0213 15:28:03.501467 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:03.505440 containerd[1504]: time="2025-02-13T15:28:03.505396561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.506146 containerd[1504]: time="2025-02-13T15:28:03.506097557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.537057394s" Feb 13 15:28:03.506146 containerd[1504]: time="2025-02-13T15:28:03.506133314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:28:03.506630 containerd[1504]: time="2025-02-13T15:28:03.506596435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:03.507761 containerd[1504]: time="2025-02-13T15:28:03.507732288Z" level=info msg="CreateContainer within sandbox \"b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:03.528374 containerd[1504]: time="2025-02-13T15:28:03.528313171Z" level=info msg="CreateContainer within sandbox \"b0cb798de78ac8fabc340c38d40c935c25f30294c6c68f451f59d1e89fdabd88\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7bd73a02bda6aa64c0232aafea4f291a4139bce11b90f71da0c667801372bea\"" Feb 13 15:28:03.528969 containerd[1504]: time="2025-02-13T15:28:03.528909551Z" level=info msg="StartContainer for \"a7bd73a02bda6aa64c0232aafea4f291a4139bce11b90f71da0c667801372bea\"" Feb 13 15:28:03.565532 systemd[1]: Started cri-containerd-a7bd73a02bda6aa64c0232aafea4f291a4139bce11b90f71da0c667801372bea.scope - libcontainer container a7bd73a02bda6aa64c0232aafea4f291a4139bce11b90f71da0c667801372bea. 
Feb 13 15:28:03.615813 containerd[1504]: time="2025-02-13T15:28:03.615760347Z" level=info msg="StartContainer for \"a7bd73a02bda6aa64c0232aafea4f291a4139bce11b90f71da0c667801372bea\" returns successfully" Feb 13 15:28:04.039851 containerd[1504]: time="2025-02-13T15:28:04.039783046Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:04.040945 containerd[1504]: time="2025-02-13T15:28:04.040904893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:28:04.042907 containerd[1504]: time="2025-02-13T15:28:04.042874251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 536.245776ms" Feb 13 15:28:04.042907 containerd[1504]: time="2025-02-13T15:28:04.042905581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:28:04.044319 containerd[1504]: time="2025-02-13T15:28:04.043483195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:28:04.045314 containerd[1504]: time="2025-02-13T15:28:04.045282725Z" level=info msg="CreateContainer within sandbox \"6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:04.060678 containerd[1504]: time="2025-02-13T15:28:04.060640426Z" level=info msg="CreateContainer within sandbox \"6c1b235eb96685f7f416fe3717d04f629c3349470353bb6fce8f7ca330ff434f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4677f2de0d80e0bd2599828f2eef49763cbaea54e2cace7fdd9c74179f83887b\"" Feb 13 15:28:04.061288 containerd[1504]: time="2025-02-13T15:28:04.061256042Z" level=info msg="StartContainer for \"4677f2de0d80e0bd2599828f2eef49763cbaea54e2cace7fdd9c74179f83887b\"" Feb 13 15:28:04.090495 systemd[1]: Started cri-containerd-4677f2de0d80e0bd2599828f2eef49763cbaea54e2cace7fdd9c74179f83887b.scope - libcontainer container 4677f2de0d80e0bd2599828f2eef49763cbaea54e2cace7fdd9c74179f83887b. Feb 13 15:28:04.137304 containerd[1504]: time="2025-02-13T15:28:04.137226390Z" level=info msg="StartContainer for \"4677f2de0d80e0bd2599828f2eef49763cbaea54e2cace7fdd9c74179f83887b\" returns successfully" Feb 13 15:28:04.517067 kubelet[2675]: I0213 15:28:04.516971 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75c78c8d9f-dzkbh" podStartSLOduration=20.459472938 podStartE2EDuration="24.516927489s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:27:59.985773695 +0000 UTC m=+36.876088653" lastFinishedPulling="2025-02-13 15:28:04.043228246 +0000 UTC m=+40.933543204" observedRunningTime="2025-02-13 15:28:04.515771298 +0000 UTC m=+41.406086256" watchObservedRunningTime="2025-02-13 15:28:04.516927489 +0000 UTC m=+41.407242447" Feb 13 15:28:04.535492 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:35970.service - OpenSSH per-connection server daemon (10.0.0.1:35970). 
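
The pod_startup_latency_tracker entries here fit a simple relation: podStartSLOduration equals podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For calico-apiserver-75c78c8d9f-dzkbh: 24.516927489s - 4.057454551s = 20.459472938s, matching the logged value. A small check of that arithmetic with Go's time package, using timestamps copied from the entry above (the "m=+..." monotonic suffixes dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet timestamps in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-02-13 15:27:40 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-02-13 15:27:59.985773695 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-02-13 15:28:04.043228246 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-02-13 15:28:04.516927489 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling images
	fmt.Println(e2e, e2e-pull)      // 24.516927489s 20.459472938s
}
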
Feb 13 15:28:04.593694 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 35970 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:04.594095 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:04.598065 systemd-logind[1483]: New session 10 of user core. Feb 13 15:28:04.602470 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:28:04.748778 sshd[5497]: Connection closed by 10.0.0.1 port 35970 Feb 13 15:28:04.749507 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:04.753490 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:35970.service: Deactivated successfully. Feb 13 15:28:04.756067 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:28:04.758459 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:28:04.759494 systemd-logind[1483]: Removed session 10. Feb 13 15:28:05.516779 kubelet[2675]: I0213 15:28:05.516734 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:05.516779 kubelet[2675]: I0213 15:28:05.516760 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:05.988524 containerd[1504]: time="2025-02-13T15:28:05.988443821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:05.989404 containerd[1504]: time="2025-02-13T15:28:05.989326779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:28:05.991575 containerd[1504]: time="2025-02-13T15:28:05.991535396Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:05.995968 containerd[1504]: time="2025-02-13T15:28:05.995905272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:05.996656 containerd[1504]: time="2025-02-13T15:28:05.996631044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.953125177s" Feb 13 15:28:05.996701 containerd[1504]: time="2025-02-13T15:28:05.996656823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:28:05.997505 containerd[1504]: time="2025-02-13T15:28:05.997484258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:28:05.998625 containerd[1504]: time="2025-02-13T15:28:05.998570837Z" level=info msg="CreateContainer within sandbox \"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:28:06.025199 containerd[1504]: time="2025-02-13T15:28:06.025140796Z" level=info msg="CreateContainer within sandbox \"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413\"" Feb 13 15:28:06.025759 containerd[1504]: time="2025-02-13T15:28:06.025716137Z" level=info msg="StartContainer for \"0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413\"" Feb 13 15:28:06.056923 systemd[1]: run-containerd-runc-k8s.io-0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413-runc.GnbIKg.mount: Deactivated successfully. Feb 13 15:28:06.065516 systemd[1]: Started cri-containerd-0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413.scope - libcontainer container 0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413. Feb 13 15:28:06.218923 containerd[1504]: time="2025-02-13T15:28:06.218871442Z" level=info msg="StartContainer for \"0c7cb596c26e767e5c2dc71bfa4777af58b1964c357205130cec382d633c2413\" returns successfully" Feb 13 15:28:09.107186 containerd[1504]: time="2025-02-13T15:28:09.107114405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.108432 containerd[1504]: time="2025-02-13T15:28:09.108390841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:28:09.110138 containerd[1504]: time="2025-02-13T15:28:09.110107494Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.112680 containerd[1504]: time="2025-02-13T15:28:09.112630851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.113278 containerd[1504]: time="2025-02-13T15:28:09.113252207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.115741039s" Feb 13 15:28:09.113316 containerd[1504]: time="2025-02-13T15:28:09.113281282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:28:09.114401 containerd[1504]: time="2025-02-13T15:28:09.114180289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:28:09.126396 containerd[1504]: time="2025-02-13T15:28:09.125793710Z" level=info msg="CreateContainer within sandbox \"1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:28:09.144136 containerd[1504]: time="2025-02-13T15:28:09.144077734Z" level=info msg="CreateContainer within sandbox \"1cbc34887c5f80ffe3dd1e9bdfaa1397be93d5bcdd409f37fa73201082ba7667\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"47bb79f262a9266a84dab3a2d4d6062a38f8aba271a214e03053f4659886b909\"" Feb 13 15:28:09.145099 containerd[1504]: time="2025-02-13T15:28:09.144952766Z" level=info msg="StartContainer for \"47bb79f262a9266a84dab3a2d4d6062a38f8aba271a214e03053f4659886b909\"" Feb 13 
15:28:09.179611 systemd[1]: Started cri-containerd-47bb79f262a9266a84dab3a2d4d6062a38f8aba271a214e03053f4659886b909.scope - libcontainer container 47bb79f262a9266a84dab3a2d4d6062a38f8aba271a214e03053f4659886b909. Feb 13 15:28:09.232116 containerd[1504]: time="2025-02-13T15:28:09.231522888Z" level=info msg="StartContainer for \"47bb79f262a9266a84dab3a2d4d6062a38f8aba271a214e03053f4659886b909\" returns successfully" Feb 13 15:28:09.540042 kubelet[2675]: I0213 15:28:09.539982 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75c78c8d9f-kw8hn" podStartSLOduration=26.002130543 podStartE2EDuration="29.539939158s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:27:59.968629702 +0000 UTC m=+36.858944660" lastFinishedPulling="2025-02-13 15:28:03.506438297 +0000 UTC m=+40.396753275" observedRunningTime="2025-02-13 15:28:04.527857955 +0000 UTC m=+41.418172914" watchObservedRunningTime="2025-02-13 15:28:09.539939158 +0000 UTC m=+46.430254116" Feb 13 15:28:09.541074 kubelet[2675]: I0213 15:28:09.540380 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cb78c4b87-jxmw6" podStartSLOduration=20.64296466 podStartE2EDuration="29.540326344s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:28:00.216275565 +0000 UTC m=+37.106590523" lastFinishedPulling="2025-02-13 15:28:09.113637249 +0000 UTC m=+46.003952207" observedRunningTime="2025-02-13 15:28:09.539177227 +0000 UTC m=+46.429492195" watchObservedRunningTime="2025-02-13 15:28:09.540326344 +0000 UTC m=+46.430641302" Feb 13 15:28:09.761314 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:35976.service - OpenSSH per-connection server daemon (10.0.0.1:35976). Feb 13 15:28:09.813438 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 35976 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:09.815122 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:09.819472 systemd-logind[1483]: New session 11 of user core. Feb 13 15:28:09.829532 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:28:09.952601 sshd[5607]: Connection closed by 10.0.0.1 port 35976 Feb 13 15:28:09.951905 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:09.970163 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:35976.service: Deactivated successfully. Feb 13 15:28:09.972398 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:28:09.973180 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:28:09.984735 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988). Feb 13 15:28:09.985552 systemd-logind[1483]: Removed session 11. Feb 13 15:28:10.034379 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:10.036308 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:10.040819 systemd-logind[1483]: New session 12 of user core. Feb 13 15:28:10.051524 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 15:28:10.324764 sshd[5623]: Connection closed by 10.0.0.1 port 35988 Feb 13 15:28:10.325218 sshd-session[5621]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:10.337105 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:35988.service: Deactivated successfully. Feb 13 15:28:10.339891 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:28:10.342206 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:28:10.349845 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:35994.service - OpenSSH per-connection server daemon (10.0.0.1:35994). Feb 13 15:28:10.351100 systemd-logind[1483]: Removed session 12. Feb 13 15:28:10.385697 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 35994 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:10.387332 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:10.391552 systemd-logind[1483]: New session 13 of user core. Feb 13 15:28:10.397537 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:28:10.532388 kubelet[2675]: I0213 15:28:10.532338 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:10.533211 sshd[5635]: Connection closed by 10.0.0.1 port 35994 Feb 13 15:28:10.533775 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:10.538057 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:35994.service: Deactivated successfully. Feb 13 15:28:10.540380 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:28:10.541002 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:28:10.541873 systemd-logind[1483]: Removed session 13. Feb 13 15:28:11.354344 containerd[1504]: time="2025-02-13T15:28:11.354247300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.355070 containerd[1504]: time="2025-02-13T15:28:11.354998690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:28:11.356178 containerd[1504]: time="2025-02-13T15:28:11.356140514Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.358544 containerd[1504]: time="2025-02-13T15:28:11.358502537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.359297 containerd[1504]: time="2025-02-13T15:28:11.359266090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.245051708s" Feb 13 15:28:11.359340 containerd[1504]: time="2025-02-13T15:28:11.359301867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:28:11.361328 containerd[1504]: 
time="2025-02-13T15:28:11.361283878Z" level=info msg="CreateContainer within sandbox \"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:28:11.380301 containerd[1504]: time="2025-02-13T15:28:11.380222777Z" level=info msg="CreateContainer within sandbox \"6a8ad077ea14c734c913be540a09dd6a7722c2a59994ce372a70d55b16b1e03a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"16bca4c48f5f7aab5f94ac41d9dc4cee5e29d51b2b350eac78a5a05bdd271131\"" Feb 13 15:28:11.380994 containerd[1504]: time="2025-02-13T15:28:11.380940133Z" level=info msg="StartContainer for \"16bca4c48f5f7aab5f94ac41d9dc4cee5e29d51b2b350eac78a5a05bdd271131\"" Feb 13 15:28:11.420595 systemd[1]: Started cri-containerd-16bca4c48f5f7aab5f94ac41d9dc4cee5e29d51b2b350eac78a5a05bdd271131.scope - libcontainer container 16bca4c48f5f7aab5f94ac41d9dc4cee5e29d51b2b350eac78a5a05bdd271131. Feb 13 15:28:11.471215 containerd[1504]: time="2025-02-13T15:28:11.471166428Z" level=info msg="StartContainer for \"16bca4c48f5f7aab5f94ac41d9dc4cee5e29d51b2b350eac78a5a05bdd271131\" returns successfully" Feb 13 15:28:11.550140 kubelet[2675]: I0213 15:28:11.550077 2675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9pw2x" podStartSLOduration=20.334907504 podStartE2EDuration="31.550024845s" podCreationTimestamp="2025-02-13 15:27:40 +0000 UTC" firstStartedPulling="2025-02-13 15:28:00.144485632 +0000 UTC m=+37.034800590" lastFinishedPulling="2025-02-13 15:28:11.359602973 +0000 UTC m=+48.249917931" observedRunningTime="2025-02-13 15:28:11.549494671 +0000 UTC m=+48.439809639" watchObservedRunningTime="2025-02-13 15:28:11.550024845 +0000 UTC m=+48.440339804" Feb 13 15:28:12.318868 kubelet[2675]: I0213 15:28:12.318283 2675 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:28:12.322388 kubelet[2675]: I0213 15:28:12.320546 2675 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:28:15.545247 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:38568.service - OpenSSH per-connection server daemon (10.0.0.1:38568). Feb 13 15:28:15.596205 sshd[5700]: Accepted publickey for core from 10.0.0.1 port 38568 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:15.598188 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:15.602591 systemd-logind[1483]: New session 14 of user core. Feb 13 15:28:15.620511 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:28:15.764624 sshd[5702]: Connection closed by 10.0.0.1 port 38568 Feb 13 15:28:15.764958 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:15.771408 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:38568.service: Deactivated successfully. Feb 13 15:28:15.773879 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:28:15.774341 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:28:15.775252 systemd-logind[1483]: Removed session 14. 
Feb 13 15:28:17.246775 kubelet[2675]: I0213 15:28:17.246731 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:17.464539 kubelet[2675]: E0213 15:28:17.464494 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:20.786086 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:38582.service - OpenSSH per-connection server daemon (10.0.0.1:38582). Feb 13 15:28:20.850143 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 38582 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:20.852646 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:20.857848 systemd-logind[1483]: New session 15 of user core. Feb 13 15:28:20.865518 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:28:21.010470 sshd[5781]: Connection closed by 10.0.0.1 port 38582 Feb 13 15:28:21.010915 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:21.015697 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:38582.service: Deactivated successfully. Feb 13 15:28:21.017937 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:28:21.018631 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:28:21.019756 systemd-logind[1483]: Removed session 15. Feb 13 15:28:23.196966 containerd[1504]: time="2025-02-13T15:28:23.196726298Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:28:23.196966 containerd[1504]: time="2025-02-13T15:28:23.196867643Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:28:23.196966 containerd[1504]: time="2025-02-13T15:28:23.196881599Z" level=info msg="StopPodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:28:23.203820 containerd[1504]: time="2025-02-13T15:28:23.203767057Z" level=info msg="RemovePodSandbox for \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:28:23.215417 containerd[1504]: time="2025-02-13T15:28:23.215381306Z" level=info msg="Forcibly stopping sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\"" Feb 13 15:28:23.215548 containerd[1504]: time="2025-02-13T15:28:23.215491914Z" level=info msg="TearDown network for sandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" successfully" Feb 13 15:28:23.224289 containerd[1504]: time="2025-02-13T15:28:23.224257229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.224372 containerd[1504]: time="2025-02-13T15:28:23.224313104Z" level=info msg="RemovePodSandbox \"cc6c2977da6e23476e309d3d5de792b1b506e0af129ebac700a35091b8abfba8\" returns successfully" Feb 13 15:28:23.224814 containerd[1504]: time="2025-02-13T15:28:23.224773587Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:28:23.224908 containerd[1504]: time="2025-02-13T15:28:23.224885267Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:28:23.224908 containerd[1504]: time="2025-02-13T15:28:23.224903952Z" level=info msg="StopPodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:28:23.225183 containerd[1504]: time="2025-02-13T15:28:23.225152568Z" level=info msg="RemovePodSandbox for \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:28:23.225183 containerd[1504]: time="2025-02-13T15:28:23.225180831Z" level=info msg="Forcibly stopping sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\"" Feb 13 15:28:23.225301 containerd[1504]: time="2025-02-13T15:28:23.225259810Z" level=info msg="TearDown network for sandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" successfully" Feb 13 15:28:23.229177 containerd[1504]: time="2025-02-13T15:28:23.229148706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.229263 containerd[1504]: time="2025-02-13T15:28:23.229193180Z" level=info msg="RemovePodSandbox \"ecf918f180e0ace30fa29f1022e923efd9f46e0a58757e29098705bdf6d6b486\" returns successfully" Feb 13 15:28:23.229549 containerd[1504]: time="2025-02-13T15:28:23.229524200Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:28:23.229684 containerd[1504]: time="2025-02-13T15:28:23.229652852Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:28:23.229684 containerd[1504]: time="2025-02-13T15:28:23.229671907Z" level=info msg="StopPodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:28:23.229993 containerd[1504]: time="2025-02-13T15:28:23.229967712Z" level=info msg="RemovePodSandbox for \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:28:23.230073 containerd[1504]: time="2025-02-13T15:28:23.229994693Z" level=info msg="Forcibly stopping sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\"" Feb 13 15:28:23.230156 containerd[1504]: time="2025-02-13T15:28:23.230107835Z" level=info msg="TearDown network for sandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" successfully" Feb 13 15:28:23.234270 containerd[1504]: time="2025-02-13T15:28:23.234234236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.234373 containerd[1504]: time="2025-02-13T15:28:23.234281205Z" level=info msg="RemovePodSandbox \"983961175ec9df6fd35cdb28a6e85dc2e967a9a2b91e4c50ffb3224d936bd740\" returns successfully" Feb 13 15:28:23.234783 containerd[1504]: time="2025-02-13T15:28:23.234610854Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:28:23.234783 containerd[1504]: time="2025-02-13T15:28:23.234711663Z" level=info msg="TearDown network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" successfully" Feb 13 15:28:23.234783 containerd[1504]: time="2025-02-13T15:28:23.234724767Z" level=info msg="StopPodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" returns successfully" Feb 13 15:28:23.235098 containerd[1504]: time="2025-02-13T15:28:23.235054305Z" level=info msg="RemovePodSandbox for \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:28:23.235151 containerd[1504]: time="2025-02-13T15:28:23.235105652Z" level=info msg="Forcibly stopping sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\"" Feb 13 15:28:23.235262 containerd[1504]: time="2025-02-13T15:28:23.235210608Z" level=info msg="TearDown network for sandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" successfully" Feb 13 15:28:23.239783 containerd[1504]: time="2025-02-13T15:28:23.239717835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.239858 containerd[1504]: time="2025-02-13T15:28:23.239783919Z" level=info msg="RemovePodSandbox \"4a35f6ad31639ad006c76fa1b29dfa20cac1c6ffcfc0339024c55fcfa6bdf436\" returns successfully" Feb 13 15:28:23.240129 containerd[1504]: time="2025-02-13T15:28:23.240076568Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" Feb 13 15:28:23.240182 containerd[1504]: time="2025-02-13T15:28:23.240170284Z" level=info msg="TearDown network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" successfully" Feb 13 15:28:23.240214 containerd[1504]: time="2025-02-13T15:28:23.240180573Z" level=info msg="StopPodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" returns successfully" Feb 13 15:28:23.241382 containerd[1504]: time="2025-02-13T15:28:23.240478502Z" level=info msg="RemovePodSandbox for \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" Feb 13 15:28:23.241382 containerd[1504]: time="2025-02-13T15:28:23.240511914Z" level=info msg="Forcibly stopping sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\"" Feb 13 15:28:23.241382 containerd[1504]: time="2025-02-13T15:28:23.240612984Z" level=info msg="TearDown network for sandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" successfully" Feb 13 15:28:23.245378 containerd[1504]: time="2025-02-13T15:28:23.245328852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.245448 containerd[1504]: time="2025-02-13T15:28:23.245390047Z" level=info msg="RemovePodSandbox \"6bbc519442a9a516704417d02151c7faaa5fbb47f963e42a0e3da776c7e0251f\" returns successfully" Feb 13 15:28:23.245691 containerd[1504]: time="2025-02-13T15:28:23.245649142Z" level=info msg="StopPodSandbox for \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\"" Feb 13 15:28:23.245770 containerd[1504]: time="2025-02-13T15:28:23.245747868Z" level=info msg="TearDown network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" successfully" Feb 13 15:28:23.245819 containerd[1504]: time="2025-02-13T15:28:23.245768366Z" level=info msg="StopPodSandbox for \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" returns successfully" Feb 13 15:28:23.246079 containerd[1504]: time="2025-02-13T15:28:23.246045927Z" level=info msg="RemovePodSandbox for \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\"" Feb 13 15:28:23.246143 containerd[1504]: time="2025-02-13T15:28:23.246080511Z" level=info msg="Forcibly stopping sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\"" Feb 13 15:28:23.246219 containerd[1504]: time="2025-02-13T15:28:23.246175220Z" level=info msg="TearDown network for sandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" successfully" Feb 13 15:28:23.250681 containerd[1504]: time="2025-02-13T15:28:23.250646829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.250753 containerd[1504]: time="2025-02-13T15:28:23.250692555Z" level=info msg="RemovePodSandbox \"95aedfc2838ca21b6039d2be51b217971b54151c0332a6be625e4cbd5ffa413d\" returns successfully" Feb 13 15:28:23.250941 containerd[1504]: time="2025-02-13T15:28:23.250911615Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:28:23.251049 containerd[1504]: time="2025-02-13T15:28:23.251019909Z" level=info msg="TearDown network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" successfully" Feb 13 15:28:23.251049 containerd[1504]: time="2025-02-13T15:28:23.251038343Z" level=info msg="StopPodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" returns successfully" Feb 13 15:28:23.251322 containerd[1504]: time="2025-02-13T15:28:23.251293662Z" level=info msg="RemovePodSandbox for \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:28:23.251322 containerd[1504]: time="2025-02-13T15:28:23.251314721Z" level=info msg="Forcibly stopping sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\"" Feb 13 15:28:23.251433 containerd[1504]: time="2025-02-13T15:28:23.251397908Z" level=info msg="TearDown network for sandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" successfully" Feb 13 15:28:23.255033 containerd[1504]: time="2025-02-13T15:28:23.255005215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.255104 containerd[1504]: time="2025-02-13T15:28:23.255051001Z" level=info msg="RemovePodSandbox \"641230267db02743560ee14e0a10c4f7f522ab6ce014b37b72abfe6828e3961a\" returns successfully" Feb 13 15:28:23.255341 containerd[1504]: time="2025-02-13T15:28:23.255306871Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" Feb 13 15:28:23.255484 containerd[1504]: time="2025-02-13T15:28:23.255422599Z" level=info msg="TearDown network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" successfully" Feb 13 15:28:23.255484 containerd[1504]: time="2025-02-13T15:28:23.255438329Z" level=info msg="StopPodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" returns successfully" Feb 13 15:28:23.255768 containerd[1504]: time="2025-02-13T15:28:23.255747348Z" level=info msg="RemovePodSandbox for \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" Feb 13 15:28:23.255800 containerd[1504]: time="2025-02-13T15:28:23.255767907Z" level=info msg="Forcibly stopping sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\"" Feb 13 15:28:23.255866 containerd[1504]: time="2025-02-13T15:28:23.255831827Z" level=info msg="TearDown network for sandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" successfully" Feb 13 15:28:23.259953 containerd[1504]: time="2025-02-13T15:28:23.259917091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.260037 containerd[1504]: time="2025-02-13T15:28:23.259959270Z" level=info msg="RemovePodSandbox \"525630a2740e82aeda00a65e4f0ca4323d0b4daeb242d27e41df6bffab3b7044\" returns successfully" Feb 13 15:28:23.260270 containerd[1504]: time="2025-02-13T15:28:23.260226471Z" level=info msg="StopPodSandbox for \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\"" Feb 13 15:28:23.260383 containerd[1504]: time="2025-02-13T15:28:23.260335045Z" level=info msg="TearDown network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" successfully" Feb 13 15:28:23.260383 containerd[1504]: time="2025-02-13T15:28:23.260380029Z" level=info msg="StopPodSandbox for \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" returns successfully" Feb 13 15:28:23.260641 containerd[1504]: time="2025-02-13T15:28:23.260609770Z" level=info msg="RemovePodSandbox for \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\"" Feb 13 15:28:23.260641 containerd[1504]: time="2025-02-13T15:28:23.260632893Z" level=info msg="Forcibly stopping sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\"" Feb 13 15:28:23.260741 containerd[1504]: time="2025-02-13T15:28:23.260703296Z" level=info msg="TearDown network for sandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" successfully" Feb 13 15:28:23.264238 containerd[1504]: time="2025-02-13T15:28:23.264204023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.264302 containerd[1504]: time="2025-02-13T15:28:23.264246343Z" level=info msg="RemovePodSandbox \"e4969b3b0710d09cb8ab8e70cd3b57231211fda1dcf333cccbaa8c07bd77595c\" returns successfully" Feb 13 15:28:23.264544 containerd[1504]: time="2025-02-13T15:28:23.264519886Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:28:23.264637 containerd[1504]: time="2025-02-13T15:28:23.264620015Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:28:23.264662 containerd[1504]: time="2025-02-13T15:28:23.264634842Z" level=info msg="StopPodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:28:23.264955 containerd[1504]: time="2025-02-13T15:28:23.264932761Z" level=info msg="RemovePodSandbox for \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:28:23.264998 containerd[1504]: time="2025-02-13T15:28:23.264958880Z" level=info msg="Forcibly stopping sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\"" Feb 13 15:28:23.265052 containerd[1504]: time="2025-02-13T15:28:23.265026677Z" level=info msg="TearDown network for sandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" successfully" Feb 13 15:28:23.269337 containerd[1504]: time="2025-02-13T15:28:23.269311035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.269401 containerd[1504]: time="2025-02-13T15:28:23.269367771Z" level=info msg="RemovePodSandbox \"a85584e23c5fb30dcd28c4103db1130b1d191d16c65a99dcc82e42d9424bc447\" returns successfully" Feb 13 15:28:23.269647 containerd[1504]: time="2025-02-13T15:28:23.269620505Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:28:23.269740 containerd[1504]: time="2025-02-13T15:28:23.269718579Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:28:23.269784 containerd[1504]: time="2025-02-13T15:28:23.269738767Z" level=info msg="StopPodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:28:23.270037 containerd[1504]: time="2025-02-13T15:28:23.270000729Z" level=info msg="RemovePodSandbox for \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:28:23.270037 containerd[1504]: time="2025-02-13T15:28:23.270024334Z" level=info msg="Forcibly stopping sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\"" Feb 13 15:28:23.270144 containerd[1504]: time="2025-02-13T15:28:23.270104404Z" level=info msg="TearDown network for sandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" successfully" Feb 13 15:28:23.274301 containerd[1504]: time="2025-02-13T15:28:23.274257766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.274377 containerd[1504]: time="2025-02-13T15:28:23.274303011Z" level=info msg="RemovePodSandbox \"90bbd54917443ac7f7a64d891550fb2be68c956e1ce3b927822b4ff3bbb84a02\" returns successfully" Feb 13 15:28:23.274664 containerd[1504]: time="2025-02-13T15:28:23.274628080Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:28:23.274746 containerd[1504]: time="2025-02-13T15:28:23.274721806Z" level=info msg="TearDown network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" successfully" Feb 13 15:28:23.274746 containerd[1504]: time="2025-02-13T15:28:23.274741844Z" level=info msg="StopPodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" returns successfully" Feb 13 15:28:23.277062 containerd[1504]: time="2025-02-13T15:28:23.275023752Z" level=info msg="RemovePodSandbox for \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:28:23.277062 containerd[1504]: time="2025-02-13T15:28:23.275053750Z" level=info msg="Forcibly stopping sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\"" Feb 13 15:28:23.277062 containerd[1504]: time="2025-02-13T15:28:23.275128971Z" level=info msg="TearDown network for sandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" successfully" Feb 13 15:28:23.279445 containerd[1504]: time="2025-02-13T15:28:23.279418288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.279502 containerd[1504]: time="2025-02-13T15:28:23.279453343Z" level=info msg="RemovePodSandbox \"a0959c1712ffabfb4a90b06e60a8e747651298d92f72a60e5441a31d6f2d2302\" returns successfully" Feb 13 15:28:23.279947 containerd[1504]: time="2025-02-13T15:28:23.279742977Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" Feb 13 15:28:23.279947 containerd[1504]: time="2025-02-13T15:28:23.279836141Z" level=info msg="TearDown network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" successfully" Feb 13 15:28:23.279947 containerd[1504]: time="2025-02-13T15:28:23.279872489Z" level=info msg="StopPodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" returns successfully" Feb 13 15:28:23.280110 containerd[1504]: time="2025-02-13T15:28:23.280082263Z" level=info msg="RemovePodSandbox for \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" Feb 13 15:28:23.280110 containerd[1504]: time="2025-02-13T15:28:23.280106408Z" level=info msg="Forcibly stopping sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\"" Feb 13 15:28:23.280236 containerd[1504]: time="2025-02-13T15:28:23.280175357Z" level=info msg="TearDown network for sandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" successfully" Feb 13 15:28:23.283787 containerd[1504]: time="2025-02-13T15:28:23.283746698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.283862 containerd[1504]: time="2025-02-13T15:28:23.283805779Z" level=info msg="RemovePodSandbox \"f7f358c58822783d20013a10f1b63215693506dc7c1b2b87d27d160c4ec475d7\" returns successfully" Feb 13 15:28:23.284058 containerd[1504]: time="2025-02-13T15:28:23.284033756Z" level=info msg="StopPodSandbox for \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\"" Feb 13 15:28:23.284141 containerd[1504]: time="2025-02-13T15:28:23.284123394Z" level=info msg="TearDown network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" successfully" Feb 13 15:28:23.284141 containerd[1504]: time="2025-02-13T15:28:23.284138162Z" level=info msg="StopPodSandbox for \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" returns successfully" Feb 13 15:28:23.284427 containerd[1504]: time="2025-02-13T15:28:23.284406336Z" level=info msg="RemovePodSandbox for \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\"" Feb 13 15:28:23.284427 containerd[1504]: time="2025-02-13T15:28:23.284425412Z" level=info msg="Forcibly stopping sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\"" Feb 13 15:28:23.284508 containerd[1504]: time="2025-02-13T15:28:23.284488710Z" level=info msg="TearDown network for sandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" successfully" Feb 13 15:28:23.288030 containerd[1504]: time="2025-02-13T15:28:23.287984368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.288030 containerd[1504]: time="2025-02-13T15:28:23.288022500Z" level=info msg="RemovePodSandbox \"769dcb1ff55768ade1a54d0d5ac7ec0ff262206933670e0133ba4f37a61e0d7c\" returns successfully" Feb 13 15:28:23.288286 containerd[1504]: time="2025-02-13T15:28:23.288254696Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:28:23.288397 containerd[1504]: time="2025-02-13T15:28:23.288379951Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:28:23.288397 containerd[1504]: time="2025-02-13T15:28:23.288394769Z" level=info msg="StopPodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:28:23.288677 containerd[1504]: time="2025-02-13T15:28:23.288646371Z" level=info msg="RemovePodSandbox for \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:28:23.288677 containerd[1504]: time="2025-02-13T15:28:23.288676206Z" level=info msg="Forcibly stopping sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\"" Feb 13 15:28:23.288774 containerd[1504]: time="2025-02-13T15:28:23.288745937Z" level=info msg="TearDown network for sandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" successfully" Feb 13 15:28:23.292470 containerd[1504]: time="2025-02-13T15:28:23.292432975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.292470 containerd[1504]: time="2025-02-13T15:28:23.292469603Z" level=info msg="RemovePodSandbox \"6f8b47abfa21019b06bffd52c11095757fd8f7f5aeff9e1fa3154e306b9bf12f\" returns successfully" Feb 13 15:28:23.292848 containerd[1504]: time="2025-02-13T15:28:23.292816945Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:28:23.292943 containerd[1504]: time="2025-02-13T15:28:23.292915339Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:28:23.292943 containerd[1504]: time="2025-02-13T15:28:23.292935968Z" level=info msg="StopPodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:28:23.293291 containerd[1504]: time="2025-02-13T15:28:23.293260867Z" level=info msg="RemovePodSandbox for \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:28:23.293325 containerd[1504]: time="2025-02-13T15:28:23.293296314Z" level=info msg="Forcibly stopping sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\"" Feb 13 15:28:23.293454 containerd[1504]: time="2025-02-13T15:28:23.293411750Z" level=info msg="TearDown network for sandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" successfully" Feb 13 15:28:23.298635 containerd[1504]: time="2025-02-13T15:28:23.298551804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.298694 containerd[1504]: time="2025-02-13T15:28:23.298650359Z" level=info msg="RemovePodSandbox \"12783466253bd0f785a2cca28bfa6fa489da7c79fb06a7cbd578595f9c845fd1\" returns successfully" Feb 13 15:28:23.299140 containerd[1504]: time="2025-02-13T15:28:23.299110112Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:28:23.299282 containerd[1504]: time="2025-02-13T15:28:23.299249834Z" level=info msg="TearDown network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" successfully" Feb 13 15:28:23.299282 containerd[1504]: time="2025-02-13T15:28:23.299270863Z" level=info msg="StopPodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" returns successfully" Feb 13 15:28:23.300424 containerd[1504]: time="2025-02-13T15:28:23.299577298Z" level=info msg="RemovePodSandbox for \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:28:23.300424 containerd[1504]: time="2025-02-13T15:28:23.299605010Z" level=info msg="Forcibly stopping sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\"" Feb 13 15:28:23.300424 containerd[1504]: time="2025-02-13T15:28:23.299678197Z" level=info msg="TearDown network for sandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" successfully" Feb 13 15:28:23.303382 containerd[1504]: time="2025-02-13T15:28:23.303330389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.303382 containerd[1504]: time="2025-02-13T15:28:23.303384511Z" level=info msg="RemovePodSandbox \"25d3f58fd7e823716d25d5a5fd12e2a3382b6f0f782712253a6d7aa34921baee\" returns successfully" Feb 13 15:28:23.303656 containerd[1504]: time="2025-02-13T15:28:23.303625693Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" Feb 13 15:28:23.303759 containerd[1504]: time="2025-02-13T15:28:23.303736180Z" level=info msg="TearDown network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" successfully" Feb 13 15:28:23.303759 containerd[1504]: time="2025-02-13T15:28:23.303754214Z" level=info msg="StopPodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" returns successfully" Feb 13 15:28:23.304132 containerd[1504]: time="2025-02-13T15:28:23.304106666Z" level=info msg="RemovePodSandbox for \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" Feb 13 15:28:23.304132 containerd[1504]: time="2025-02-13T15:28:23.304130932Z" level=info msg="Forcibly stopping sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\"" Feb 13 15:28:23.304244 containerd[1504]: time="2025-02-13T15:28:23.304201504Z" level=info msg="TearDown network for sandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" successfully" Feb 13 15:28:23.308984 containerd[1504]: time="2025-02-13T15:28:23.308948018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.309078 containerd[1504]: time="2025-02-13T15:28:23.308992752Z" level=info msg="RemovePodSandbox \"dfcd358dc8e87207b2650ee7b24a84d01f9fadd3d41fd577cdfb174c2b6be4d8\" returns successfully" Feb 13 15:28:23.309271 containerd[1504]: time="2025-02-13T15:28:23.309250316Z" level=info msg="StopPodSandbox for \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\"" Feb 13 15:28:23.309368 containerd[1504]: time="2025-02-13T15:28:23.309337470Z" level=info msg="TearDown network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" successfully" Feb 13 15:28:23.309442 containerd[1504]: time="2025-02-13T15:28:23.309418462Z" level=info msg="StopPodSandbox for \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" returns successfully" Feb 13 15:28:23.309694 containerd[1504]: time="2025-02-13T15:28:23.309673820Z" level=info msg="RemovePodSandbox for \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\"" Feb 13 15:28:23.309729 containerd[1504]: time="2025-02-13T15:28:23.309694650Z" level=info msg="Forcibly stopping sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\"" Feb 13 15:28:23.309804 containerd[1504]: time="2025-02-13T15:28:23.309766073Z" level=info msg="TearDown network for sandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" successfully" Feb 13 15:28:23.313845 containerd[1504]: time="2025-02-13T15:28:23.313809359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.313926 containerd[1504]: time="2025-02-13T15:28:23.313853953Z" level=info msg="RemovePodSandbox \"9167239940180e10b7a873b4af9632796c01e64e54bc5f43fb280962c8c2360b\" returns successfully" Feb 13 15:28:23.314181 containerd[1504]: time="2025-02-13T15:28:23.314149948Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:28:23.314274 containerd[1504]: time="2025-02-13T15:28:23.314245337Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:28:23.314274 containerd[1504]: time="2025-02-13T15:28:23.314264743Z" level=info msg="StopPodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:28:23.315394 containerd[1504]: time="2025-02-13T15:28:23.314509944Z" level=info msg="RemovePodSandbox for \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:28:23.315394 containerd[1504]: time="2025-02-13T15:28:23.314542856Z" level=info msg="Forcibly stopping sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\"" Feb 13 15:28:23.315394 containerd[1504]: time="2025-02-13T15:28:23.314625130Z" level=info msg="TearDown network for sandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" successfully" Feb 13 15:28:23.323711 containerd[1504]: time="2025-02-13T15:28:23.323652617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.323796 containerd[1504]: time="2025-02-13T15:28:23.323717348Z" level=info msg="RemovePodSandbox \"9d2cd14720eb1a268121d01177f94e31bc3b956e7ad7f14ea9ebff4c92f320cf\" returns successfully" Feb 13 15:28:23.324213 containerd[1504]: time="2025-02-13T15:28:23.324075650Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:28:23.324213 containerd[1504]: time="2025-02-13T15:28:23.324177501Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:28:23.324213 containerd[1504]: time="2025-02-13T15:28:23.324207858Z" level=info msg="StopPodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:28:23.324436 containerd[1504]: time="2025-02-13T15:28:23.324414305Z" level=info msg="RemovePodSandbox for \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:28:23.324436 containerd[1504]: time="2025-02-13T15:28:23.324433551Z" level=info msg="Forcibly stopping sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\"" Feb 13 15:28:23.324522 containerd[1504]: time="2025-02-13T15:28:23.324493053Z" level=info msg="TearDown network for sandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" successfully" Feb 13 15:28:23.328540 containerd[1504]: time="2025-02-13T15:28:23.328500631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.328635 containerd[1504]: time="2025-02-13T15:28:23.328549102Z" level=info msg="RemovePodSandbox \"949e1fe1e59a4b1bfd078f4d5437a5e3f4b940924782a6e2fc041658c3885a42\" returns successfully" Feb 13 15:28:23.328915 containerd[1504]: time="2025-02-13T15:28:23.328876307Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:28:23.329002 containerd[1504]: time="2025-02-13T15:28:23.328976906Z" level=info msg="TearDown network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" successfully" Feb 13 15:28:23.329002 containerd[1504]: time="2025-02-13T15:28:23.328993437Z" level=info msg="StopPodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" returns successfully" Feb 13 15:28:23.329269 containerd[1504]: time="2025-02-13T15:28:23.329224330Z" level=info msg="RemovePodSandbox for \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:28:23.329269 containerd[1504]: time="2025-02-13T15:28:23.329255979Z" level=info msg="Forcibly stopping sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\"" Feb 13 15:28:23.329418 containerd[1504]: time="2025-02-13T15:28:23.329375183Z" level=info msg="TearDown network for sandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" successfully" Feb 13 15:28:23.333505 containerd[1504]: time="2025-02-13T15:28:23.333471057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.333612 containerd[1504]: time="2025-02-13T15:28:23.333512184Z" level=info msg="RemovePodSandbox \"abdd0e0881b7a80f16084c52a2e6d9de15ed442e0d501ed54a15d569eb45e4f2\" returns successfully" Feb 13 15:28:23.334180 containerd[1504]: time="2025-02-13T15:28:23.333894862Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" Feb 13 15:28:23.334180 containerd[1504]: time="2025-02-13T15:28:23.334045013Z" level=info msg="TearDown network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" successfully" Feb 13 15:28:23.334180 containerd[1504]: time="2025-02-13T15:28:23.334062666Z" level=info msg="StopPodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" returns successfully" Feb 13 15:28:23.334422 containerd[1504]: time="2025-02-13T15:28:23.334394560Z" level=info msg="RemovePodSandbox for \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" Feb 13 15:28:23.334494 containerd[1504]: time="2025-02-13T15:28:23.334423294Z" level=info msg="Forcibly stopping sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\"" Feb 13 15:28:23.334529 containerd[1504]: time="2025-02-13T15:28:23.334497062Z" level=info msg="TearDown network for sandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" successfully" Feb 13 15:28:23.363175 containerd[1504]: time="2025-02-13T15:28:23.363121435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.363175 containerd[1504]: time="2025-02-13T15:28:23.363181217Z" level=info msg="RemovePodSandbox \"0c32dc72f89e83bcfcffc7a7630222424e796258fc8adb0ee4c8f30eaa056c3d\" returns successfully" Feb 13 15:28:23.363743 containerd[1504]: time="2025-02-13T15:28:23.363544929Z" level=info msg="StopPodSandbox for \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\"" Feb 13 15:28:23.363743 containerd[1504]: time="2025-02-13T15:28:23.363653122Z" level=info msg="TearDown network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" successfully" Feb 13 15:28:23.363743 containerd[1504]: time="2025-02-13T15:28:23.363688789Z" level=info msg="StopPodSandbox for \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" returns successfully" Feb 13 15:28:23.363991 containerd[1504]: time="2025-02-13T15:28:23.363964866Z" level=info msg="RemovePodSandbox for \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\"" Feb 13 15:28:23.364025 containerd[1504]: time="2025-02-13T15:28:23.363993600Z" level=info msg="Forcibly stopping sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\"" Feb 13 15:28:23.364137 containerd[1504]: time="2025-02-13T15:28:23.364087827Z" level=info msg="TearDown network for sandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" successfully" Feb 13 15:28:23.437492 containerd[1504]: time="2025-02-13T15:28:23.437426367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.437492 containerd[1504]: time="2025-02-13T15:28:23.437491751Z" level=info msg="RemovePodSandbox \"dbb3033ee4372086647d2b23e6d6d3d2d928eb4fa8592906fbb313d9f85ff5cb\" returns successfully" Feb 13 15:28:23.437973 containerd[1504]: time="2025-02-13T15:28:23.437917461Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:28:23.438068 containerd[1504]: time="2025-02-13T15:28:23.438040603Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:28:23.438068 containerd[1504]: time="2025-02-13T15:28:23.438061724Z" level=info msg="StopPodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:28:23.438658 containerd[1504]: time="2025-02-13T15:28:23.438618000Z" level=info msg="RemovePodSandbox for \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:28:23.438658 containerd[1504]: time="2025-02-13T15:28:23.438647787Z" level=info msg="Forcibly stopping sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\"" Feb 13 15:28:23.438777 containerd[1504]: time="2025-02-13T15:28:23.438728609Z" level=info msg="TearDown network for sandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" successfully" Feb 13 15:28:23.446740 containerd[1504]: time="2025-02-13T15:28:23.446677572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.446836 containerd[1504]: time="2025-02-13T15:28:23.446761259Z" level=info msg="RemovePodSandbox \"20d3b9aa05ca7e6b7733314c8f067771307786f64ac6e6c7f01e3211821a23e2\" returns successfully" Feb 13 15:28:23.447290 containerd[1504]: time="2025-02-13T15:28:23.447196899Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:28:23.447334 containerd[1504]: time="2025-02-13T15:28:23.447301085Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:28:23.447334 containerd[1504]: time="2025-02-13T15:28:23.447311775Z" level=info msg="StopPodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:28:23.447771 containerd[1504]: time="2025-02-13T15:28:23.447734811Z" level=info msg="RemovePodSandbox for \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:28:23.447835 containerd[1504]: time="2025-02-13T15:28:23.447776109Z" level=info msg="Forcibly stopping sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\"" Feb 13 15:28:23.447920 containerd[1504]: time="2025-02-13T15:28:23.447877049Z" level=info msg="TearDown network for sandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" successfully" Feb 13 15:28:23.454000 containerd[1504]: time="2025-02-13T15:28:23.453962254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.454076 containerd[1504]: time="2025-02-13T15:28:23.454014752Z" level=info msg="RemovePodSandbox \"0ee8efaf7568a3bd05c899560752119da9161d217d714795a2d217c1ea5ab4c2\" returns successfully" Feb 13 15:28:23.454587 containerd[1504]: time="2025-02-13T15:28:23.454402352Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:28:23.454587 containerd[1504]: time="2025-02-13T15:28:23.454502361Z" level=info msg="TearDown network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" successfully" Feb 13 15:28:23.454587 containerd[1504]: time="2025-02-13T15:28:23.454513111Z" level=info msg="StopPodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" returns successfully" Feb 13 15:28:23.454855 containerd[1504]: time="2025-02-13T15:28:23.454821722Z" level=info msg="RemovePodSandbox for \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:28:23.454855 containerd[1504]: time="2025-02-13T15:28:23.454852920Z" level=info msg="Forcibly stopping sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\"" Feb 13 15:28:23.455048 containerd[1504]: time="2025-02-13T15:28:23.454933462Z" level=info msg="TearDown network for sandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" successfully" Feb 13 15:28:23.472133 containerd[1504]: time="2025-02-13T15:28:23.472104554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.472194 containerd[1504]: time="2025-02-13T15:28:23.472137275Z" level=info msg="RemovePodSandbox \"e4a628a7e2e583579f32411f878cadb1cb9c8b1ca9ec4577aeaf7c5f134c6fff\" returns successfully" Feb 13 15:28:23.472417 containerd[1504]: time="2025-02-13T15:28:23.472395431Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" Feb 13 15:28:23.472496 containerd[1504]: time="2025-02-13T15:28:23.472479229Z" level=info msg="TearDown network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" successfully" Feb 13 15:28:23.472496 containerd[1504]: time="2025-02-13T15:28:23.472494198Z" level=info msg="StopPodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" returns successfully" Feb 13 15:28:23.472743 containerd[1504]: time="2025-02-13T15:28:23.472703862Z" level=info msg="RemovePodSandbox for \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" Feb 13 15:28:23.472743 containerd[1504]: time="2025-02-13T15:28:23.472722326Z" level=info msg="Forcibly stopping sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\"" Feb 13 15:28:23.472801 containerd[1504]: time="2025-02-13T15:28:23.472781949Z" level=info msg="TearDown network for sandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" successfully" Feb 13 15:28:23.509448 containerd[1504]: time="2025-02-13T15:28:23.509384357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:28:23.509448 containerd[1504]: time="2025-02-13T15:28:23.509457985Z" level=info msg="RemovePodSandbox \"1719e63b7cbafc18a217ed1a914eb32695db60d4e4c3a05fbf4b8ad6c0811a8b\" returns successfully" Feb 13 15:28:23.509792 containerd[1504]: time="2025-02-13T15:28:23.509756197Z" level=info msg="StopPodSandbox for \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\"" Feb 13 15:28:23.509933 containerd[1504]: time="2025-02-13T15:28:23.509876183Z" level=info msg="TearDown network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" successfully" Feb 13 15:28:23.509933 containerd[1504]: time="2025-02-13T15:28:23.509890610Z" level=info msg="StopPodSandbox for \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" returns successfully" Feb 13 15:28:23.512148 containerd[1504]: time="2025-02-13T15:28:23.510241000Z" level=info msg="RemovePodSandbox for \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\"" Feb 13 15:28:23.512148 containerd[1504]: time="2025-02-13T15:28:23.510263662Z" level=info msg="Forcibly stopping sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\"" Feb 13 15:28:23.512148 containerd[1504]: time="2025-02-13T15:28:23.510332241Z" level=info msg="TearDown network for sandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" successfully" Feb 13 15:28:23.529107 containerd[1504]: time="2025-02-13T15:28:23.529072167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:23.529156 containerd[1504]: time="2025-02-13T15:28:23.529113414Z" level=info msg="RemovePodSandbox \"33e257faa75c7010a7172daba8550496a84fa90f1f42e9eb84a28bede1898919\" returns successfully" Feb 13 15:28:26.025395 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:43202.service - OpenSSH per-connection server daemon (10.0.0.1:43202). Feb 13 15:28:26.072925 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 43202 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:26.074633 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:26.079450 systemd-logind[1483]: New session 16 of user core. Feb 13 15:28:26.087553 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:28:26.208544 sshd[5825]: Connection closed by 10.0.0.1 port 43202 Feb 13 15:28:26.209103 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:26.214282 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:43202.service: Deactivated successfully. Feb 13 15:28:26.216665 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:28:26.217418 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:28:26.218497 systemd-logind[1483]: Removed session 16. Feb 13 15:28:29.203710 kubelet[2675]: E0213 15:28:29.203651 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:31.222919 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:43214.service - OpenSSH per-connection server daemon (10.0.0.1:43214). Feb 13 15:28:31.267798 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:31.269559 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:31.273857 systemd-logind[1483]: New session 17 of user core. Feb 13 15:28:31.291501 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:28:31.437955 sshd[5841]: Connection closed by 10.0.0.1 port 43214 Feb 13 15:28:31.438339 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:31.449095 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:43214.service: Deactivated successfully. Feb 13 15:28:31.451996 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:28:31.454267 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:28:31.464678 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:43230.service - OpenSSH per-connection server daemon (10.0.0.1:43230). Feb 13 15:28:31.465777 systemd-logind[1483]: Removed session 17. Feb 13 15:28:31.496919 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 43230 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:31.498664 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:31.503275 systemd-logind[1483]: New session 18 of user core. Feb 13 15:28:31.517562 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:28:32.123591 sshd[5855]: Connection closed by 10.0.0.1 port 43230 Feb 13 15:28:32.124089 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:32.138313 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:43230.service: Deactivated successfully. 
Feb 13 15:28:32.140308 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:28:32.142006 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:28:32.149588 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:43242.service - OpenSSH per-connection server daemon (10.0.0.1:43242). Feb 13 15:28:32.150466 systemd-logind[1483]: Removed session 18. Feb 13 15:28:32.184858 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 43242 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:32.186256 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:32.191002 systemd-logind[1483]: New session 19 of user core. Feb 13 15:28:32.209604 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:28:34.400661 sshd[5868]: Connection closed by 10.0.0.1 port 43242 Feb 13 15:28:34.401322 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:34.414399 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:43242.service: Deactivated successfully. Feb 13 15:28:34.416645 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:28:34.418452 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:28:34.426676 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:43258.service - OpenSSH per-connection server daemon (10.0.0.1:43258). Feb 13 15:28:34.427923 systemd-logind[1483]: Removed session 19. Feb 13 15:28:34.461552 sshd[5889]: Accepted publickey for core from 10.0.0.1 port 43258 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:34.463368 sshd-session[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:34.467810 systemd-logind[1483]: New session 20 of user core. Feb 13 15:28:34.478474 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:28:34.857128 sshd[5891]: Connection closed by 10.0.0.1 port 43258 Feb 13 15:28:34.857450 sshd-session[5889]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:34.866205 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:43258.service: Deactivated successfully. Feb 13 15:28:34.868532 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:28:34.870385 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:28:34.880095 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:42198.service - OpenSSH per-connection server daemon (10.0.0.1:42198). Feb 13 15:28:34.881304 systemd-logind[1483]: Removed session 20. Feb 13 15:28:34.915282 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 42198 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:34.917173 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:34.921511 systemd-logind[1483]: New session 21 of user core. Feb 13 15:28:34.928502 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:28:35.042892 sshd[5903]: Connection closed by 10.0.0.1 port 42198 Feb 13 15:28:35.043327 sshd-session[5901]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:35.048490 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:42198.service: Deactivated successfully. Feb 13 15:28:35.051520 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:28:35.052258 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:28:35.053162 systemd-logind[1483]: Removed session 21. 
Feb 13 15:28:35.204224 kubelet[2675]: E0213 15:28:35.204183 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:40.057095 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:42208.service - OpenSSH per-connection server daemon (10.0.0.1:42208). Feb 13 15:28:40.100785 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 42208 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:40.102476 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:40.107055 systemd-logind[1483]: New session 22 of user core. Feb 13 15:28:40.115597 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:28:40.262931 sshd[5919]: Connection closed by 10.0.0.1 port 42208 Feb 13 15:28:40.263504 sshd-session[5917]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:40.268333 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:42208.service: Deactivated successfully. Feb 13 15:28:40.270540 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:28:40.271216 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:28:40.272361 systemd-logind[1483]: Removed session 22. Feb 13 15:28:41.584482 kubelet[2675]: I0213 15:28:41.584422 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:45.280710 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:60706.service - OpenSSH per-connection server daemon (10.0.0.1:60706). Feb 13 15:28:45.318541 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 60706 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:45.320137 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:45.324190 systemd-logind[1483]: New session 23 of user core. Feb 13 15:28:45.335517 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:28:45.459813 sshd[5944]: Connection closed by 10.0.0.1 port 60706 Feb 13 15:28:45.460266 sshd-session[5942]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:45.463785 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:60706.service: Deactivated successfully. Feb 13 15:28:45.466580 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:28:45.469412 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:28:45.470727 systemd-logind[1483]: Removed session 23. Feb 13 15:28:45.931077 kubelet[2675]: I0213 15:28:45.931025 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:47.203943 kubelet[2675]: E0213 15:28:47.203902 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:50.472366 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:60714.service - OpenSSH per-connection server daemon (10.0.0.1:60714). Feb 13 15:28:50.534136 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 60714 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:50.536889 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:50.541926 systemd-logind[1483]: New session 24 of user core. Feb 13 15:28:50.551550 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 15:28:50.704689 sshd[6003]: Connection closed by 10.0.0.1 port 60714 Feb 13 15:28:50.705135 sshd-session[6001]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:50.710960 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:60714.service: Deactivated successfully. Feb 13 15:28:50.713982 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:28:50.714864 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:28:50.716435 systemd-logind[1483]: Removed session 24. Feb 13 15:28:55.716678 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:40918.service - OpenSSH per-connection server daemon (10.0.0.1:40918). Feb 13 15:28:55.754770 sshd[6015]: Accepted publickey for core from 10.0.0.1 port 40918 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:28:55.756291 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:55.760597 systemd-logind[1483]: New session 25 of user core. Feb 13 15:28:55.774474 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:28:55.888962 sshd[6017]: Connection closed by 10.0.0.1 port 40918 Feb 13 15:28:55.889384 sshd-session[6015]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:55.892943 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:40918.service: Deactivated successfully. Feb 13 15:28:55.895047 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:28:55.897232 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:28:55.898391 systemd-logind[1483]: Removed session 25. Feb 13 15:28:57.203726 kubelet[2675]: E0213 15:28:57.203672 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:29:00.905538 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:40922.service - OpenSSH per-connection server daemon (10.0.0.1:40922). Feb 13 15:29:00.975576 sshd[6031]: Accepted publickey for core from 10.0.0.1 port 40922 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:29:00.977253 sshd-session[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:29:00.981417 systemd-logind[1483]: New session 26 of user core. Feb 13 15:29:00.989499 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:29:01.100461 sshd[6033]: Connection closed by 10.0.0.1 port 40922 Feb 13 15:29:01.100864 sshd-session[6031]: pam_unix(sshd:session): session closed for user core Feb 13 15:29:01.104642 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:40922.service: Deactivated successfully. Feb 13 15:29:01.106599 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:29:01.107265 systemd-logind[1483]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:29:01.108170 systemd-logind[1483]: Removed session 26. Feb 13 15:29:02.203537 kubelet[2675]: E0213 15:29:02.203497 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"