Jan 29 11:22:22.901034 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:22:22.901072 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:22:22.901086 kernel: BIOS-provided physical RAM map:
Jan 29 11:22:22.901094 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:22:22.901111 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:22:22.901119 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:22:22.901130 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:22:22.901162 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:22:22.901179 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:22:22.901196 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:22:22.901219 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 29 11:22:22.901236 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:22:22.901252 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:22:22.901269 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:22:22.901295 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:22:22.901307 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:22:22.901334 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:22:22.901343 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:22:22.901352 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:22:22.901360 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:22:22.901369 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:22:22.901378 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:22:22.901387 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:22:22.901395 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:22:22.901404 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:22:22.901422 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:22:22.901432 kernel: NX (Execute Disable) protection: active
Jan 29 11:22:22.901445 kernel: APIC: Static calls initialized
Jan 29 11:22:22.901453 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:22:22.901462 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:22:22.901471 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:22:22.901479 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:22:22.901488 kernel: extended physical RAM map:
Jan 29 11:22:22.901497 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:22:22.901506 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:22:22.901515 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:22:22.901525 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:22:22.901534 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:22:22.901547 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:22:22.901556 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:22:22.901570 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 29 11:22:22.901579 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 29 11:22:22.901588 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 29 11:22:22.901597 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 29 11:22:22.901606 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 29 11:22:22.901619 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:22:22.901628 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:22:22.901637 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:22:22.901646 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:22:22.901655 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:22:22.901665 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:22:22.901674 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:22:22.901684 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:22:22.901693 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:22:22.901706 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:22:22.901716 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:22:22.901725 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:22:22.901735 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:22:22.901744 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:22:22.901754 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:22:22.901763 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:22:22.901773 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 29 11:22:22.901782 kernel: random: crng init done
Jan 29 11:22:22.901792 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 29 11:22:22.901801 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 29 11:22:22.901814 kernel: secureboot: Secure boot disabled
Jan 29 11:22:22.901823 kernel: SMBIOS 2.8 present.
Jan 29 11:22:22.901833 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 29 11:22:22.901842 kernel: Hypervisor detected: KVM
Jan 29 11:22:22.901852 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:22:22.901861 kernel: kvm-clock: using sched offset of 2556198308 cycles
Jan 29 11:22:22.901871 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:22:22.901881 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:22:22.901891 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:22:22.901902 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:22:22.901911 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 29 11:22:22.901925 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:22:22.901935 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:22:22.901944 kernel: Using GB pages for direct mapping
Jan 29 11:22:22.901954 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:22:22.901964 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:22:22.901974 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:22:22.901984 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.901994 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.902003 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:22:22.902016 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.902026 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.902036 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.902046 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:22:22.902055 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:22:22.902065 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:22:22.902074 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:22:22.902084 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:22:22.902094 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:22:22.902107 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:22:22.902117 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:22:22.902126 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:22:22.902163 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:22:22.902173 kernel: No NUMA configuration found
Jan 29 11:22:22.902183 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 29 11:22:22.902193 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 29 11:22:22.902203 kernel: Zone ranges:
Jan 29 11:22:22.902213 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:22:22.902227 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 29 11:22:22.902237 kernel: Normal empty
Jan 29 11:22:22.902246 kernel: Movable zone start for each node
Jan 29 11:22:22.902255 kernel: Early memory node ranges
Jan 29 11:22:22.902265 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:22:22.902274 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:22:22.902283 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:22:22.902293 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 29 11:22:22.902302 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 29 11:22:22.902315 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 29 11:22:22.902325 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 29 11:22:22.902334 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 29 11:22:22.902343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 29 11:22:22.902353 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:22:22.902363 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:22:22.902383 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:22:22.902396 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:22:22.902406 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 29 11:22:22.902426 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 29 11:22:22.902437 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 29 11:22:22.902448 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 29 11:22:22.902458 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 29 11:22:22.902473 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:22:22.902484 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:22:22.902495 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:22:22.902505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:22:22.902518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:22:22.902528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:22:22.902539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:22:22.902549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:22:22.902560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:22:22.902571 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:22:22.902582 kernel: TSC deadline timer available
Jan 29 11:22:22.902594 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:22:22.902605 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:22:22.902616 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:22:22.902629 kernel: kvm-guest: setup PV sched yield
Jan 29 11:22:22.902639 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 29 11:22:22.902650 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:22:22.902661 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:22:22.902672 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:22:22.902682 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:22:22.902692 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:22:22.902702 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:22:22.902712 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:22:22.902726 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:22:22.902737 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:22:22.902748 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:22:22.902758 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:22:22.902769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:22:22.902779 kernel: Fallback order for Node 0: 0
Jan 29 11:22:22.902788 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 29 11:22:22.902799 kernel: Policy zone: DMA32
Jan 29 11:22:22.902811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:22:22.902822 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 175776K reserved, 0K cma-reserved)
Jan 29 11:22:22.902833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:22:22.902843 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:22:22.902852 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:22:22.902862 kernel: Dynamic Preempt: voluntary
Jan 29 11:22:22.902873 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:22:22.902883 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:22:22.902894 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:22:22.902907 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:22:22.902922 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:22:22.902942 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:22:22.902959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:22:22.902978 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:22:22.902994 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:22:22.903014 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:22:22.903030 kernel: Console: colour dummy device 80x25
Jan 29 11:22:22.903049 kernel: printk: console [ttyS0] enabled
Jan 29 11:22:22.903071 kernel: ACPI: Core revision 20230628
Jan 29 11:22:22.903090 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:22:22.903107 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:22:22.903117 kernel: x2apic enabled
Jan 29 11:22:22.903127 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:22:22.903162 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:22:22.903173 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:22:22.903183 kernel: kvm-guest: setup PV IPIs
Jan 29 11:22:22.903194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:22:22.903208 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:22:22.903218 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:22:22.903229 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:22:22.903239 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:22:22.903249 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:22:22.903258 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:22:22.903268 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:22:22.903279 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:22:22.903289 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:22:22.903302 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:22:22.903311 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:22:22.903321 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:22:22.903332 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:22:22.903342 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:22:22.903353 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:22:22.903364 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:22:22.903374 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:22:22.903395 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:22:22.903420 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:22:22.903441 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:22:22.903452 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:22:22.903462 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:22:22.903472 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:22:22.903482 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:22:22.903492 kernel: landlock: Up and running.
Jan 29 11:22:22.903502 kernel: SELinux: Initializing.
Jan 29 11:22:22.903512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:22:22.903526 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:22:22.903536 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:22:22.903546 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:22:22.903556 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:22:22.903567 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:22:22.903577 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:22:22.903587 kernel: ... version: 0
Jan 29 11:22:22.903598 kernel: ... bit width: 48
Jan 29 11:22:22.903611 kernel: ... generic registers: 6
Jan 29 11:22:22.903621 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:22:22.903631 kernel: ... max period: 00007fffffffffff
Jan 29 11:22:22.903641 kernel: ... fixed-purpose events: 0
Jan 29 11:22:22.903650 kernel: ... event mask: 000000000000003f
Jan 29 11:22:22.903660 kernel: signal: max sigframe size: 1776
Jan 29 11:22:22.903670 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:22:22.903681 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:22:22.903692 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:22:22.903702 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:22:22.903715 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:22:22.903726 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:22:22.903736 kernel: smpboot: Max logical packages: 1
Jan 29 11:22:22.903746 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:22:22.903756 kernel: devtmpfs: initialized
Jan 29 11:22:22.903767 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:22:22.903777 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:22:22.903788 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:22:22.903799 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 29 11:22:22.903813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:22:22.903824 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 29 11:22:22.903834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:22:22.903844 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:22:22.903854 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:22:22.903865 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:22:22.903875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:22:22.903885 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:22:22.903899 kernel: audit: type=2000 audit(1738149742.759:1): state=initialized audit_enabled=0 res=1
Jan 29 11:22:22.903909 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:22:22.903919 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:22:22.903928 kernel: cpuidle: using governor menu
Jan 29 11:22:22.903938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:22:22.903948 kernel: dca service started, version 1.12.1
Jan 29 11:22:22.903958 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 29 11:22:22.903967 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:22:22.903977 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:22:22.903989 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:22:22.903999 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:22:22.904009 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:22:22.904019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:22:22.904029 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:22:22.904039 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:22:22.904050 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:22:22.904060 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:22:22.904070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:22:22.904084 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:22:22.904094 kernel: ACPI: Interpreter enabled
Jan 29 11:22:22.904104 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:22:22.904114 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:22:22.904124 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:22:22.904157 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:22:22.904168 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:22:22.904178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:22:22.904399 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:22:22.904583 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:22:22.904739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:22:22.904753 kernel: PCI host bridge to bus 0000:00
Jan 29 11:22:22.904944 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:22:22.905160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:22:22.905310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:22:22.905467 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 29 11:22:22.905606 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 29 11:22:22.905738 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:22:22.905883 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:22:22.906119 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:22:22.906316 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:22:22.906487 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:22:22.906654 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:22:22.906817 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:22:22.906982 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:22:22.907171 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:22:22.907342 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:22:22.907512 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:22:22.907665 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:22:22.907834 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 29 11:22:22.908000 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:22:22.908201 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:22:22.908366 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:22:22.908525 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 29 11:22:22.908692 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:22:22.908849 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:22:22.908998 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:22:22.909183 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 29 11:22:22.909355 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:22:22.909538 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:22:22.909693 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:22:22.909861 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:22:22.910021 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:22:22.910236 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:22:22.910406 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:22:22.910574 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:22:22.910590 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:22:22.910600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:22:22.910611 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:22:22.910621 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:22:22.910636 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:22:22.910647 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:22:22.910656 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:22:22.910666 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:22:22.910676 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:22:22.910686 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:22:22.910695 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:22:22.910705 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:22:22.910715 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:22:22.910728 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:22:22.910738 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:22:22.910748 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:22:22.910758 kernel: iommu: Default domain type: Translated
Jan 29 11:22:22.910766 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:22:22.910774 kernel: efivars: Registered efivars operations
Jan 29 11:22:22.910781 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:22:22.910789 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:22:22.910796 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:22:22.910807 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 29 11:22:22.910814 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 29 11:22:22.910822 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 29 11:22:22.910829 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 29 11:22:22.910837 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 29 11:22:22.910844 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 29 11:22:22.910852 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 29 11:22:22.910978 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:22:22.911148 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:22:22.911312 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:22:22.911327 kernel: vgaarb: loaded
Jan 29 11:22:22.911338 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:22:22.911348 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:22:22.911358 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:22:22.911369 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:22:22.911380 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:22:22.911390 kernel: pnp: PnP ACPI init
Jan 29 11:22:22.911581 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 29 11:22:22.911597 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:22:22.911608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:22:22.911619 kernel: NET: Registered PF_INET protocol family
Jan 29 11:22:22.911651 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:22:22.911666 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:22:22.911678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:22:22.911690 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:22:22.911705 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:22:22.911716 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:22:22.911727 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:22:22.911738 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:22:22.911749 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:22:22.911760 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:22:22.911922 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:22:22.912084 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:22:22.912261 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:22:22.912415 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:22:22.912572 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:22:22.912712 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 29 11:22:22.912859 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 29 11:22:22.913004 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:22:22.913020 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:22:22.913032 kernel: Initialise system trusted keyrings
Jan 29 11:22:22.913048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:22:22.913059 kernel: Key type asymmetric registered
Jan 29 11:22:22.913071 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:22:22.913082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:22:22.913093 kernel: io scheduler mq-deadline registered
Jan 29 11:22:22.913103 kernel: io scheduler kyber registered
Jan 29 11:22:22.913114 kernel: io scheduler bfq registered
Jan 29 11:22:22.913124 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:22:22.913151 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:22:22.913165 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:22:22.913178 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:22:22.913188 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:22:22.913199 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:22:22.913210 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:22:22.913222 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:22:22.913236 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:22:22.913247 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:22:22.913420 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:22:22.913569 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:22:22.913715 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:22:22 UTC (1738149742)
Jan 29 11:22:22.913861 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:22:22.913876 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:22:22.913887 kernel: efifb: probing for efifb
Jan 29 11:22:22.913902 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 29 11:22:22.913913 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 29 11:22:22.913925 kernel: efifb: scrolling: redraw
Jan 29 11:22:22.913936 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:22:22.913946 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:22:22.913957 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:22:22.913967 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:22:22.913978 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:22:22.913989 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:22:22.914003 kernel: Segment Routing with IPv6
Jan 29 11:22:22.914013 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:22:22.914023 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:22:22.914034 kernel: Key type dns_resolver registered
Jan 29 11:22:22.914044 kernel: IPI shorthand broadcast: enabled
Jan 29 11:22:22.914055 kernel: sched_clock: Marking stable (601003683, 151602732)->(771473789, -18867374)
Jan 29 11:22:22.914066 kernel: registered taskstats version 1
Jan 29 11:22:22.914077 kernel: Loading compiled-in X.509 certificates
Jan 29 11:22:22.914087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:22:22.914101 kernel: Key type .fscrypt registered
Jan 29 11:22:22.914111 kernel: Key type fscrypt-provisioning registered
Jan 29 11:22:22.914122 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:22:22.914148 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:22:22.914159 kernel: ima: No architecture policies found Jan 29 11:22:22.914170 kernel: clk: Disabling unused clocks Jan 29 11:22:22.914181 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:22:22.914192 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:22:22.914203 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:22:22.914220 kernel: Run /init as init process Jan 29 11:22:22.914231 kernel: with arguments: Jan 29 11:22:22.914242 kernel: /init Jan 29 11:22:22.914252 kernel: with environment: Jan 29 11:22:22.914263 kernel: HOME=/ Jan 29 11:22:22.914273 kernel: TERM=linux Jan 29 11:22:22.914284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:22:22.914297 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:22:22.914314 systemd[1]: Detected virtualization kvm. Jan 29 11:22:22.914326 systemd[1]: Detected architecture x86-64. Jan 29 11:22:22.914337 systemd[1]: Running in initrd. Jan 29 11:22:22.914348 systemd[1]: No hostname configured, using default hostname. Jan 29 11:22:22.914359 systemd[1]: Hostname set to . Jan 29 11:22:22.914371 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:22:22.914382 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:22:22.914394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:22:22.914418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 11:22:22.914430 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:22:22.914442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:22:22.914454 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:22:22.914466 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:22:22.914479 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:22:22.914490 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:22:22.914506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:22:22.914518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:22:22.914528 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:22:22.914539 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:22:22.914550 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:22:22.914562 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:22:22.914573 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:22:22.914585 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:22:22.914599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:22:22.914610 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:22:22.914622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:22:22.914633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:22:22.914644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 11:22:22.914656 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:22:22.914666 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:22:22.914677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:22:22.914689 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:22:22.914704 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:22:22.914715 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:22:22.914726 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:22:22.914737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:22:22.914748 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:22:22.914760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:22:22.914771 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:22:22.914813 systemd-journald[194]: Collecting audit messages is disabled. Jan 29 11:22:22.914846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:22:22.914858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:22:22.914870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:22:22.914881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:22:22.914893 systemd-journald[194]: Journal started Jan 29 11:22:22.914917 systemd-journald[194]: Runtime Journal (/run/log/journal/5aca52f2b5324630a4d682542919a318) is 6.0M, max 48.3M, 42.2M free. Jan 29 11:22:22.903518 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 11:22:22.919927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 29 11:22:22.919952 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:22:22.924034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:22:22.935159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:22:22.936528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:22:22.939797 kernel: Bridge firewalling registered Jan 29 11:22:22.938444 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 11:22:22.941167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:22:22.943584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:22:22.946314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:22:22.961274 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:22:22.963885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:22:22.975413 dracut-cmdline[223]: dracut-dracut-053 Jan 29 11:22:22.976471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:22:22.978637 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:22:22.992263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:22:23.025461 systemd-resolved[247]: Positive Trust Anchors: Jan 29 11:22:23.025479 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:22:23.025511 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:22:23.028180 systemd-resolved[247]: Defaulting to hostname 'linux'. Jan 29 11:22:23.029372 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:22:23.034209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:22:23.078170 kernel: SCSI subsystem initialized Jan 29 11:22:23.087163 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:22:23.097164 kernel: iscsi: registered transport (tcp) Jan 29 11:22:23.118510 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:22:23.118546 kernel: QLogic iSCSI HBA Driver Jan 29 11:22:23.171185 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:22:23.187280 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:22:23.213153 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:22:23.213183 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:22:23.215170 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:22:23.256158 kernel: raid6: avx2x4 gen() 29522 MB/s Jan 29 11:22:23.273154 kernel: raid6: avx2x2 gen() 24405 MB/s Jan 29 11:22:23.290455 kernel: raid6: avx2x1 gen() 14756 MB/s Jan 29 11:22:23.290478 kernel: raid6: using algorithm avx2x4 gen() 29522 MB/s Jan 29 11:22:23.308254 kernel: raid6: .... xor() 6931 MB/s, rmw enabled Jan 29 11:22:23.308268 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:22:23.328165 kernel: xor: automatically using best checksumming function avx Jan 29 11:22:23.477166 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:22:23.490994 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:22:23.498298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:22:23.515279 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 29 11:22:23.520910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:22:23.528246 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:22:23.541807 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 29 11:22:23.573234 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:22:23.580292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:22:23.639908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:22:23.649506 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:22:23.659868 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:22:23.661826 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 11:22:23.665118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:22:23.666860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:22:23.675756 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:22:23.699508 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:22:23.699525 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:22:23.699666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:22:23.699678 kernel: GPT:9289727 != 19775487 Jan 29 11:22:23.699695 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:22:23.699705 kernel: GPT:9289727 != 19775487 Jan 29 11:22:23.699715 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:22:23.699725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:22:23.699735 kernel: libata version 3.00 loaded. Jan 29 11:22:23.674617 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:22:23.685975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:22:23.701840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:22:23.701964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:22:23.706830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:22:23.706900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:22:23.707064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:22:23.716000 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:22:23.716031 kernel: AES CTR mode by8 optimization enabled Jan 29 11:22:23.715979 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:22:23.722486 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:22:23.736869 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:22:23.736885 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:22:23.737039 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:22:23.737284 kernel: scsi host0: ahci Jan 29 11:22:23.737446 kernel: scsi host1: ahci Jan 29 11:22:23.737592 kernel: scsi host2: ahci Jan 29 11:22:23.737736 kernel: scsi host3: ahci Jan 29 11:22:23.737884 kernel: scsi host4: ahci Jan 29 11:22:23.738028 kernel: scsi host5: ahci Jan 29 11:22:23.738208 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 29 11:22:23.738221 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 29 11:22:23.738231 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 29 11:22:23.738241 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 29 11:22:23.738252 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 29 11:22:23.738262 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 29 11:22:23.726855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:22:23.741155 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464) Jan 29 11:22:23.743318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:22:23.746460 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (475) Jan 29 11:22:23.749604 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:22:23.758323 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 29 11:22:23.771939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:22:23.775739 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:22:23.775814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:22:23.791252 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:22:23.793444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:22:23.793502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:22:23.793740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:22:23.794614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:22:23.809074 disk-uuid[557]: Primary Header is updated. Jan 29 11:22:23.809074 disk-uuid[557]: Secondary Entries is updated. Jan 29 11:22:23.809074 disk-uuid[557]: Secondary Header is updated. Jan 29 11:22:23.813532 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:22:23.814409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:22:23.830290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:22:23.860564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:22:24.044160 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:22:24.044195 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:22:24.044213 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:22:24.045151 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:22:24.052152 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:22:24.052171 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:22:24.053317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:22:24.053329 kernel: ata3.00: applying bridge limits Jan 29 11:22:24.054150 kernel: ata3.00: configured for UDMA/100 Jan 29 11:22:24.055149 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:22:24.097683 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:22:24.110775 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:22:24.110788 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:22:24.822807 disk-uuid[559]: The operation has completed successfully. Jan 29 11:22:24.824653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:22:24.850459 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:22:24.850587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:22:24.875340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:22:24.880188 sh[597]: Success Jan 29 11:22:24.892349 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:22:24.921771 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:22:24.939493 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:22:24.944222 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:22:24.954153 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:22:24.954181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:22:24.954192 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:22:24.954209 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:22:24.955485 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:22:24.959264 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:22:24.960896 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:22:24.975326 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:22:24.977994 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:22:24.985152 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:22:24.985187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:22:24.985204 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:22:24.987157 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:22:24.996041 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:22:24.997732 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:22:25.007858 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:22:25.016292 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 11:22:25.069176 ignition[683]: Ignition 2.20.0 Jan 29 11:22:25.069188 ignition[683]: Stage: fetch-offline Jan 29 11:22:25.069220 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:22:25.069230 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:22:25.069313 ignition[683]: parsed url from cmdline: "" Jan 29 11:22:25.069317 ignition[683]: no config URL provided Jan 29 11:22:25.069321 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:22:25.069330 ignition[683]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:22:25.069367 ignition[683]: op(1): [started] loading QEMU firmware config module Jan 29 11:22:25.069372 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:22:25.081624 ignition[683]: op(1): [finished] loading QEMU firmware config module Jan 29 11:22:25.094902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:22:25.105260 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:22:25.124591 ignition[683]: parsing config with SHA512: 854c576e2cf90359f4a459746194bf48c6bf538f21ce8a4b862690ba8252a90b9170e8f4a95b4eaca7019939f899c5723fc4cd0b9ca1f9269ffc452bb152c15f Jan 29 11:22:25.126676 systemd-networkd[787]: lo: Link UP Jan 29 11:22:25.126685 systemd-networkd[787]: lo: Gained carrier Jan 29 11:22:25.128267 unknown[683]: fetched base config from "system" Jan 29 11:22:25.128275 unknown[683]: fetched user config from "qemu" Jan 29 11:22:25.129394 ignition[683]: fetch-offline: fetch-offline passed Jan 29 11:22:25.128286 systemd-networkd[787]: Enumeration completed Jan 29 11:22:25.129491 ignition[683]: Ignition finished successfully Jan 29 11:22:25.128770 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:22:25.128774 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:22:25.129556 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:22:25.130678 systemd-networkd[787]: eth0: Link UP Jan 29 11:22:25.130682 systemd-networkd[787]: eth0: Gained carrier Jan 29 11:22:25.130694 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:22:25.131650 systemd[1]: Reached target network.target - Network. Jan 29 11:22:25.133460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:22:25.135802 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:22:25.144219 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:22:25.144316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:22:25.156895 ignition[790]: Ignition 2.20.0 Jan 29 11:22:25.156907 ignition[790]: Stage: kargs Jan 29 11:22:25.157046 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:22:25.157056 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:22:25.157851 ignition[790]: kargs: kargs passed Jan 29 11:22:25.157891 ignition[790]: Ignition finished successfully Jan 29 11:22:25.161085 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:22:25.175260 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 11:22:25.186852 ignition[800]: Ignition 2.20.0 Jan 29 11:22:25.186862 ignition[800]: Stage: disks Jan 29 11:22:25.187008 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:22:25.187018 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:22:25.189792 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:22:25.187791 ignition[800]: disks: disks passed Jan 29 11:22:25.191588 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:22:25.187832 ignition[800]: Ignition finished successfully Jan 29 11:22:25.193492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:22:25.195366 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:22:25.197444 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:22:25.197498 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:22:25.205263 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:22:25.217002 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:22:25.223288 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:22:25.231208 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:22:25.313152 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:22:25.313538 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:22:25.315611 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:22:25.327206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:22:25.329064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:22:25.331243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 29 11:22:25.331286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:22:25.341245 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Jan 29 11:22:25.341260 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:22:25.341272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:22:25.341282 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:22:25.341292 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:22:25.331306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:22:25.342547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:22:25.357978 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:22:25.358857 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:22:25.393444 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:22:25.397991 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:22:25.401804 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:22:25.405700 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:22:25.487549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:22:25.495287 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:22:25.497047 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:22:25.503166 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:22:25.520552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 11:22:25.522631 ignition[933]: INFO : Ignition 2.20.0 Jan 29 11:22:25.522631 ignition[933]: INFO : Stage: mount Jan 29 11:22:25.522631 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:22:25.522631 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:22:25.522631 ignition[933]: INFO : mount: mount passed Jan 29 11:22:25.522631 ignition[933]: INFO : Ignition finished successfully Jan 29 11:22:25.524406 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:22:25.536271 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:22:25.952682 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:22:25.965269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:22:25.971149 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (948) Jan 29 11:22:25.971175 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:22:25.973630 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:22:25.973651 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:22:25.976149 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:22:25.977498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:22:26.003656 ignition[965]: INFO : Ignition 2.20.0
Jan 29 11:22:26.003656 ignition[965]: INFO : Stage: files
Jan 29 11:22:26.005448 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:22:26.005448 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:22:26.008073 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:22:26.009670 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:22:26.009670 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:22:26.012715 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:22:26.014233 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:22:26.016034 unknown[965]: wrote ssh authorized keys file for user: core
Jan 29 11:22:26.017262 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:22:26.017262 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:22:26.020528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:22:26.020528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:22:26.020528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:22:26.052273 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:22:26.135356 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:22:26.137253 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:22:26.137253 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:22:26.137253 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:22:26.142466 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 11:22:26.597335 systemd-networkd[787]: eth0: Gained IPv6LL
Jan 29 11:22:26.622637 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:22:26.998159 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:22:26.998159 ignition[965]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 11:22:27.001863 ignition[965]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:22:27.004415 ignition[965]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:22:27.004415 ignition[965]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 11:22:27.004415 ignition[965]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 11:22:27.009030 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:22:27.010859 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:22:27.010859 ignition[965]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 11:22:27.010859 ignition[965]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 29 11:22:27.015146 ignition[965]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:22:27.017076 ignition[965]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:22:27.017076 ignition[965]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 29 11:22:27.020775 ignition[965]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:22:27.039290 ignition[965]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:22:27.044612 ignition[965]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:22:27.046287 ignition[965]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:22:27.046287 ignition[965]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:22:27.049082 ignition[965]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:22:27.050536 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:22:27.052323 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:22:27.053994 ignition[965]: INFO : files: files passed
Jan 29 11:22:27.054751 ignition[965]: INFO : Ignition finished successfully
Jan 29 11:22:27.058329 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:22:27.066391 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:22:27.069409 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:22:27.070840 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:22:27.070954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:22:27.079006 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:22:27.082101 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:22:27.082101 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:22:27.086519 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:22:27.084893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:22:27.086685 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:22:27.096248 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:22:27.119092 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:22:27.119233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:22:27.121588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:22:27.122676 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:22:27.124628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:22:27.128042 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:22:27.147411 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:22:27.161325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:22:27.169740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:22:27.172056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:22:27.173339 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:22:27.175354 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:22:27.175469 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:22:27.176602 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:22:27.176946 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:22:27.177461 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:22:27.177793 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:22:27.178141 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:22:27.178649 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:22:27.178987 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:22:27.179679 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:22:27.180008 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:22:27.180517 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:22:27.180828 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:22:27.180958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:22:27.181737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:22:27.182123 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:22:27.182601 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:22:27.182702 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:22:27.182949 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:22:27.183057 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:22:27.206717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:22:27.206827 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:22:27.207158 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:22:27.207586 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:22:27.213165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:22:27.213454 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:22:27.213779 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:22:27.214126 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:22:27.214229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:22:27.214664 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:22:27.214742 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:22:27.222101 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:22:27.222215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:22:27.222609 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:22:27.222705 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:22:27.237375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:22:27.238332 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:22:27.238458 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:22:27.241243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:22:27.242350 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:22:27.242495 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:22:27.244912 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:22:27.245038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:22:27.249324 ignition[1019]: INFO : Ignition 2.20.0
Jan 29 11:22:27.249869 ignition[1019]: INFO : Stage: umount
Jan 29 11:22:27.250023 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:22:27.250023 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:22:27.250882 ignition[1019]: INFO : umount: umount passed
Jan 29 11:22:27.251381 ignition[1019]: INFO : Ignition finished successfully
Jan 29 11:22:27.257533 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:22:27.258615 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:22:27.262480 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:22:27.263515 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:22:27.266688 systemd[1]: Stopped target network.target - Network.
Jan 29 11:22:27.268486 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:22:27.268553 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:22:27.271690 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:22:27.272623 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:22:27.274606 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:22:27.274657 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:22:27.277501 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:22:27.278480 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:22:27.280692 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:22:27.282962 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:22:27.286166 systemd-networkd[787]: eth0: DHCPv6 lease lost
Jan 29 11:22:27.286206 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:22:27.289060 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:22:27.290221 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:22:27.292787 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:22:27.293833 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:22:27.297774 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:22:27.297832 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:22:27.309251 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:22:27.311204 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:22:27.311259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:22:27.313578 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:22:27.314916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:22:27.317809 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:22:27.317862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:22:27.321033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:22:27.322093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:22:27.324647 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:22:27.334062 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:22:27.334269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:22:27.335455 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:22:27.335565 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:22:27.338156 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:22:27.338229 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:22:27.339437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:22:27.339481 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:22:27.341320 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:22:27.341372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:22:27.345562 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:22:27.345613 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:22:27.347633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:22:27.347685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:22:27.355309 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:22:27.355374 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:22:27.355437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:22:27.357630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:22:27.357678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:22:27.365307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:22:27.365436 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:22:27.436473 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:22:27.436598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:22:27.438579 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:22:27.440384 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:22:27.440434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:22:27.454247 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:22:27.460309 systemd[1]: Switching root.
Jan 29 11:22:27.491275 systemd-journald[194]: Journal stopped
Jan 29 11:22:28.600004 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:22:28.600062 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:22:28.600076 kernel: SELinux: policy capability open_perms=1
Jan 29 11:22:28.600091 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:22:28.600103 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:22:28.600114 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:22:28.600129 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:22:28.600155 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:22:28.600166 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:22:28.600177 kernel: audit: type=1403 audit(1738149747.909:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:22:28.600190 systemd[1]: Successfully loaded SELinux policy in 38.781ms.
Jan 29 11:22:28.600214 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.260ms.
Jan 29 11:22:28.600227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:22:28.600239 systemd[1]: Detected virtualization kvm.
Jan 29 11:22:28.600255 systemd[1]: Detected architecture x86-64.
Jan 29 11:22:28.600274 systemd[1]: Detected first boot.
Jan 29 11:22:28.600288 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:22:28.600300 zram_generator::config[1081]: No configuration found.
Jan 29 11:22:28.600314 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:22:28.600328 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:22:28.600339 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:22:28.600352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:22:28.600363 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:22:28.600375 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:22:28.600389 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:22:28.600402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:22:28.600414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:22:28.600426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:22:28.600438 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:22:28.600455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:22:28.600467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:22:28.600480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:22:28.600492 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:22:28.600507 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:22:28.600519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:22:28.600533 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:22:28.600546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:22:28.600560 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:22:28.600572 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:22:28.600584 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:22:28.600598 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:22:28.600612 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:22:28.600624 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:22:28.600636 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:22:28.600647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:22:28.600659 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:22:28.600671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:22:28.600683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:22:28.600695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:22:28.600707 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:22:28.600721 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:22:28.600733 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:22:28.600745 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:22:28.600757 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:28.600769 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:22:28.600782 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:22:28.600794 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:22:28.600805 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:22:28.600819 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:22:28.600832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:22:28.600843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:22:28.600857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:22:28.600869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:22:28.600880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:22:28.600893 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:22:28.600905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:22:28.600917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:22:28.600932 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 11:22:28.600945 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 11:22:28.600957 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:22:28.600969 kernel: fuse: init (API version 7.39)
Jan 29 11:22:28.600980 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:22:28.601010 systemd-journald[1172]: Collecting audit messages is disabled.
Jan 29 11:22:28.601034 systemd-journald[1172]: Journal started
Jan 29 11:22:28.601058 systemd-journald[1172]: Runtime Journal (/run/log/journal/5aca52f2b5324630a4d682542919a318) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:22:28.603154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:22:28.608608 kernel: ACPI: bus type drm_connector registered
Jan 29 11:22:28.608641 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:22:28.614157 kernel: loop: module loaded
Jan 29 11:22:28.617145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:22:28.617178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:28.622421 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:22:28.623773 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:22:28.624997 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:22:28.626286 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:22:28.627405 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:22:28.628622 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:22:28.629858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:22:28.631239 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:22:28.632811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:22:28.634415 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:22:28.634641 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:22:28.636172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:22:28.636396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:22:28.637878 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:22:28.638096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:22:28.639520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:22:28.639730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:22:28.641306 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:22:28.641521 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:22:28.642995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:22:28.643247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:22:28.645400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:22:28.647199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:22:28.648972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:22:28.664117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:22:28.679299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:22:28.682268 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:22:28.683438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:22:28.688275 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:22:28.693277 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:22:28.695038 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:22:28.699197 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:22:28.700494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:22:28.702826 systemd-journald[1172]: Time spent on flushing to /var/log/journal/5aca52f2b5324630a4d682542919a318 is 29.890ms for 1029 entries.
Jan 29 11:22:28.702826 systemd-journald[1172]: System Journal (/var/log/journal/5aca52f2b5324630a4d682542919a318) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:22:28.741205 systemd-journald[1172]: Received client request to flush runtime journal.
Jan 29 11:22:28.703322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:22:28.709276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:22:28.712560 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:22:28.717397 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:22:28.718883 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:22:28.732351 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:22:28.734654 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:22:28.737145 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:22:28.739429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:22:28.744311 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:22:28.748503 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jan 29 11:22:28.748521 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jan 29 11:22:28.748784 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:22:28.755338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:22:28.766263 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:22:28.790681 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:22:28.804246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:22:28.820075 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 29 11:22:28.820097 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 29 11:22:28.825666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:22:29.229360 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:22:29.242538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:22:29.265931 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Jan 29 11:22:29.282383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:22:29.293288 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:22:29.306296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:22:29.309912 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 29 11:22:29.322173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1265)
Jan 29 11:22:29.366168 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 11:22:29.369752 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:22:29.377161 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:22:29.382726 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 29 11:22:29.385099 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 11:22:29.385282 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:22:29.385455 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:22:29.397681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:22:29.407216 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 11:22:29.436164 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:22:29.443423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:22:29.446995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:22:29.448464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:22:29.460384 systemd-networkd[1252]: lo: Link UP
Jan 29 11:22:29.460397 systemd-networkd[1252]: lo: Gained carrier
Jan 29 11:22:29.461953 systemd-networkd[1252]: Enumeration completed
Jan 29 11:22:29.462367 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:22:29.462371 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:22:29.463040 systemd-networkd[1252]: eth0: Link UP
Jan 29 11:22:29.463044 systemd-networkd[1252]: eth0: Gained carrier
Jan 29 11:22:29.463055 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:22:29.490746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:22:29.492345 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:22:29.509367 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:22:29.512208 systemd-networkd[1252]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:22:29.519152 kernel: kvm_amd: TSC scaling supported
Jan 29 11:22:29.519202 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 11:22:29.519216 kernel: kvm_amd: Nested Paging enabled
Jan 29 11:22:29.519227 kernel: kvm_amd: LBR virtualization supported
Jan 29 11:22:29.519247 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 11:22:29.519259 kernel: kvm_amd: Virtual GIF supported
Jan 29 11:22:29.541150 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:22:29.557465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:22:29.567773 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:22:29.579261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:22:29.588374 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:22:29.622065 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:22:29.623681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:22:29.630250 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:22:29.635625 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:22:29.668530 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:22:29.669984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:22:29.671253 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:22:29.671280 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:22:29.672333 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:22:29.674346 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:22:29.685250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:22:29.688039 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:22:29.689183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:22:29.690227 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:22:29.692820 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:22:29.696461 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:22:29.698589 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:22:29.708154 kernel: loop0: detected capacity change from 0 to 210664
Jan 29 11:22:29.709426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:22:29.718798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:22:29.719617 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:22:29.724179 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:22:29.746150 kernel: loop1: detected capacity change from 0 to 140992
Jan 29 11:22:29.787160 kernel: loop2: detected capacity change from 0 to 138184
Jan 29 11:22:29.818158 kernel: loop3: detected capacity change from 0 to 210664
Jan 29 11:22:29.826310 kernel: loop4: detected capacity change from 0 to 140992
Jan 29 11:22:29.836632 kernel: loop5: detected capacity change from 0 to 138184
Jan 29 11:22:29.843909 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:22:29.844518 (sd-merge)[1317]: Merged extensions into '/usr'.
Jan 29 11:22:29.848663 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:22:29.848682 systemd[1]: Reloading...
Jan 29 11:22:29.897152 zram_generator::config[1345]: No configuration found.
Jan 29 11:22:29.924814 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:22:30.025670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:22:30.089275 systemd[1]: Reloading finished in 240 ms.
Jan 29 11:22:30.108268 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:22:30.109848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:22:30.125303 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:22:30.127534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:22:30.131730 systemd[1]: Reloading requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:22:30.131743 systemd[1]: Reloading...
Jan 29 11:22:30.149638 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:22:30.150017 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:22:30.151052 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:22:30.151390 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Jan 29 11:22:30.151476 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Jan 29 11:22:30.154971 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:22:30.154986 systemd-tmpfiles[1390]: Skipping /boot
Jan 29 11:22:30.165626 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:22:30.165637 systemd-tmpfiles[1390]: Skipping /boot
Jan 29 11:22:30.177157 zram_generator::config[1421]: No configuration found.
Jan 29 11:22:30.291370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:22:30.355305 systemd[1]: Reloading finished in 223 ms.
Jan 29 11:22:30.373046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:22:30.393636 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:22:30.396517 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:22:30.399444 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:22:30.403568 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:22:30.410264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:22:30.416537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.416697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:22:30.420925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:22:30.424918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:22:30.430378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:22:30.431825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:22:30.432068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.433060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:22:30.433657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:22:30.436084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:22:30.436407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:22:30.440012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:22:30.440387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:22:30.442148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:22:30.451798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.452614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:22:30.458018 augenrules[1502]: No rules
Jan 29 11:22:30.459433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:22:30.463981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:22:30.470318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:22:30.474459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:22:30.478342 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:22:30.479468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.480708 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:22:30.481014 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:22:30.482601 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:22:30.484373 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:22:30.484855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:22:30.485063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:22:30.485880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:22:30.486092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:22:30.486378 systemd-resolved[1467]: Positive Trust Anchors:
Jan 29 11:22:30.486711 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:22:30.486794 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:22:30.487231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:22:30.488036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:22:30.490508 systemd-resolved[1467]: Defaulting to hostname 'linux'.
Jan 29 11:22:30.495432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:22:30.504259 systemd[1]: Reached target network.target - Network.
Jan 29 11:22:30.505284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:22:30.506543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.517314 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:22:30.518347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:22:30.519610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:22:30.521793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:22:30.525242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:22:30.528800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:22:30.529916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:22:30.530038 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:22:30.530115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:22:30.531308 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:22:30.534909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:22:30.535148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:22:30.535635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:22:30.535838 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:22:30.536571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:22:30.536772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:22:30.540607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:22:30.540872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:22:30.544540 augenrules[1523]: /sbin/augenrules: No change
Jan 29 11:22:30.544986 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:22:30.550347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:22:30.550419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:22:30.553018 augenrules[1556]: No rules
Jan 29 11:22:30.565377 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:22:30.566983 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:22:30.567362 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:22:30.629285 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:22:31.522535 systemd-resolved[1467]: Clock change detected. Flushing caches.
Jan 29 11:22:31.522562 systemd-timesyncd[1561]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:22:31.522603 systemd-timesyncd[1561]: Initial clock synchronization to Wed 2025-01-29 11:22:31.522485 UTC.
Jan 29 11:22:31.523459 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:22:31.524682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:22:31.525999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:22:31.527298 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:22:31.528598 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:22:31.528630 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:22:31.529584 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:22:31.530830 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:22:31.532110 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:22:31.533388 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:22:31.535107 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:22:31.538438 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:22:31.540800 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:22:31.545911 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:22:31.547045 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:22:31.548033 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:22:31.549155 systemd[1]: System is tainted: cgroupsv1
Jan 29 11:22:31.549194 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:22:31.549216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:22:31.550703 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:22:31.553204 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:22:31.555674 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:22:31.558797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:22:31.560882 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:22:31.562788 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:22:31.565133 jq[1569]: false
Jan 29 11:22:31.567143 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:22:31.573218 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:22:31.576818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:22:31.587357 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:22:31.589345 extend-filesystems[1571]: Found loop3
Jan 29 11:22:31.589345 extend-filesystems[1571]: Found loop4
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found loop5
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found sr0
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda1
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda2
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda3
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found usr
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda4
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda6
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda7
Jan 29 11:22:31.591368 extend-filesystems[1571]: Found vda9
Jan 29 11:22:31.591368 extend-filesystems[1571]: Checking size of /dev/vda9
Jan 29 11:22:31.611706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1254)
Jan 29 11:22:31.589849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:22:31.601457 dbus-daemon[1568]: [system] SELinux support is enabled
Jan 29 11:22:31.591670 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:22:31.602755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:22:31.604985 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:22:31.609499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:22:31.610883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:22:31.611221 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:22:31.611518 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:22:31.614138 extend-filesystems[1571]: Resized partition /dev/vda9
Jan 29 11:22:31.616897 jq[1591]: true
Jan 29 11:22:31.615563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:22:31.617896 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:22:31.624275 update_engine[1589]: I20250129 11:22:31.622682 1589 main.cc:92] Flatcar Update Engine starting
Jan 29 11:22:31.625268 update_engine[1589]: I20250129 11:22:31.625202 1589 update_check_scheduler.cc:74] Next update check in 10m5s
Jan 29 11:22:31.635300 extend-filesystems[1600]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:22:31.637037 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:22:31.640063 jq[1599]: true
Jan 29 11:22:31.645674 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 11:22:31.652817 systemd-networkd[1252]: eth0: Gained IPv6LL
Jan 29 11:22:31.659834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:22:31.668410 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:22:31.669277 tar[1597]: linux-amd64/helm
Jan 29 11:22:31.670103 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:22:31.680844 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:22:31.684737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:22:31.691284 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:22:31.692372 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:22:31.692404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:22:31.695308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:22:31.695332 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:22:31.697969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:22:31.698809 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:22:31.710164 systemd-logind[1584]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 11:22:31.710185 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:22:31.725525 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 11:22:31.710698 systemd-logind[1584]: New seat seat0.
Jan 29 11:22:31.723989 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:22:31.735191 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:22:31.737809 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:22:31.738118 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:22:31.739681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:22:31.756709 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:22:31.756709 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:22:31.756709 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 11:22:31.764314 extend-filesystems[1571]: Resized filesystem in /dev/vda9
Jan 29 11:22:31.766404 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:22:31.762033 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:22:31.764223 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:22:31.769498 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:22:31.776484 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:22:31.781007 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:22:31.782687 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:22:31.808309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:22:31.818920 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:22:31.826134 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:22:31.826457 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:22:31.841298 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:22:31.854492 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:22:31.863072 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:22:31.865780 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 11:22:31.867161 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:22:31.886174 containerd[1603]: time="2025-01-29T11:22:31.886094307Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:22:31.909150 containerd[1603]: time="2025-01-29T11:22:31.909032735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.910759 containerd[1603]: time="2025-01-29T11:22:31.910728775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:22:31.910759 containerd[1603]: time="2025-01-29T11:22:31.910757018Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:22:31.910833 containerd[1603]: time="2025-01-29T11:22:31.910773900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:22:31.910945 containerd[1603]: time="2025-01-29T11:22:31.910925284Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:22:31.910980 containerd[1603]: time="2025-01-29T11:22:31.910945712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911078 containerd[1603]: time="2025-01-29T11:22:31.911009422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911078 containerd[1603]: time="2025-01-29T11:22:31.911026684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911267 containerd[1603]: time="2025-01-29T11:22:31.911243621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911267 containerd[1603]: time="2025-01-29T11:22:31.911261144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911316 containerd[1603]: time="2025-01-29T11:22:31.911274699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911316 containerd[1603]: time="2025-01-29T11:22:31.911284818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911396 containerd[1603]: time="2025-01-29T11:22:31.911379445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911657 containerd[1603]: time="2025-01-29T11:22:31.911613054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911813 containerd[1603]: time="2025-01-29T11:22:31.911794344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:22:31.911841 containerd[1603]: time="2025-01-29T11:22:31.911812367Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:22:31.911922 containerd[1603]: time="2025-01-29T11:22:31.911905282Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:22:31.911978 containerd[1603]: time="2025-01-29T11:22:31.911963431Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:22:31.917097 containerd[1603]: time="2025-01-29T11:22:31.917062523Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:22:31.917133 containerd[1603]: time="2025-01-29T11:22:31.917100595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:22:31.917133 containerd[1603]: time="2025-01-29T11:22:31.917115493Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:22:31.917133 containerd[1603]: time="2025-01-29T11:22:31.917129389Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:22:31.917185 containerd[1603]: time="2025-01-29T11:22:31.917141842Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:22:31.917313 containerd[1603]: time="2025-01-29T11:22:31.917261637Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917601504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917810236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917826817Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917844520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917859368Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917873484Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917886458Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917900655Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917916525Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917930391Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917944227Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.917944 containerd[1603]: time="2025-01-29T11:22:31.917955809Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.917978100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.917993319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918005963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918019558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918033164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918046519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918057650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918070975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918084750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918100891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918112242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918123323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918136658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918151465Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:22:31.918245 containerd[1603]: time="2025-01-29T11:22:31.918171583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918188194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918199215Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918251954Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918269918Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918280948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918292620Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918302609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918314762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918325372Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:22:31.918516 containerd[1603]: time="2025-01-29T11:22:31.918335450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:22:31.918733 containerd[1603]: time="2025-01-29T11:22:31.918618952Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:22:31.918733 containerd[1603]: time="2025-01-29T11:22:31.918678824Z" level=info msg="Connect containerd service"
Jan 29 11:22:31.918733 containerd[1603]: time="2025-01-29T11:22:31.918725191Z" level=info msg="using legacy CRI server"
Jan 29 11:22:31.918733 containerd[1603]: time="2025-01-29T11:22:31.918733397Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:22:31.920670 containerd[1603]: time="2025-01-29T11:22:31.920195960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:22:31.920923 containerd[1603]: time="2025-01-29T11:22:31.920902174Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:22:31.921263 containerd[1603]: time="2025-01-29T11:22:31.921222936Z" level=info msg="Start subscribing containerd event"
Jan 29 11:22:31.921294 containerd[1603]: time="2025-01-29T11:22:31.921276756Z" level=info msg="Start recovering state"
Jan 29 11:22:31.921358 containerd[1603]: time="2025-01-29T11:22:31.921339174Z" level=info msg="Start event monitor"
Jan 29 11:22:31.921387 containerd[1603]: time="2025-01-29T11:22:31.921361966Z" level=info msg="Start snapshots syncer"
Jan 29 11:22:31.921387 containerd[1603]: time="2025-01-29T11:22:31.921371755Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:22:31.921387 containerd[1603]: time="2025-01-29T11:22:31.921380711Z" level=info msg="Start streaming server"
Jan 29 11:22:31.921731 containerd[1603]: time="2025-01-29T11:22:31.921696644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:22:31.921791 containerd[1603]: time="2025-01-29T11:22:31.921770232Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:22:31.921924 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:22:31.923182 containerd[1603]: time="2025-01-29T11:22:31.923131004Z" level=info msg="containerd successfully booted in 0.038805s"
Jan 29 11:22:32.119634 tar[1597]: linux-amd64/LICENSE
Jan 29 11:22:32.119731 tar[1597]: linux-amd64/README.md
Jan 29 11:22:32.138094 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:22:32.375514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:22:32.377247 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:22:32.378453 systemd[1]: Startup finished in 5.939s (kernel) + 3.615s (userspace) = 9.554s.
Jan 29 11:22:32.383461 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:22:32.811162 kubelet[1705]: E0129 11:22:32.811044 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:22:32.815033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:22:32.815318 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:22:41.127546 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:22:41.138916 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:57772.service - OpenSSH per-connection server daemon (10.0.0.1:57772).
Jan 29 11:22:41.190759 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 57772 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:41.192620 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:41.200592 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:22:41.212828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:22:41.214383 systemd-logind[1584]: New session 1 of user core.
Jan 29 11:22:41.226423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:22:41.237890 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:22:41.241067 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:22:41.348444 systemd[1725]: Queued start job for default target default.target.
Jan 29 11:22:41.348859 systemd[1725]: Created slice app.slice - User Application Slice.
Jan 29 11:22:41.348876 systemd[1725]: Reached target paths.target - Paths.
Jan 29 11:22:41.348890 systemd[1725]: Reached target timers.target - Timers.
Jan 29 11:22:41.361710 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:22:41.368065 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:22:41.368142 systemd[1725]: Reached target sockets.target - Sockets.
Jan 29 11:22:41.368168 systemd[1725]: Reached target basic.target - Basic System.
Jan 29 11:22:41.368210 systemd[1725]: Reached target default.target - Main User Target.
Jan 29 11:22:41.368246 systemd[1725]: Startup finished in 120ms.
Jan 29 11:22:41.368934 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:22:41.370717 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:22:41.425968 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786).
Jan 29 11:22:41.475244 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:41.476875 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:41.481140 systemd-logind[1584]: New session 2 of user core.
Jan 29 11:22:41.495126 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:22:41.550085 sshd[1740]: Connection closed by 10.0.0.1 port 57786
Jan 29 11:22:41.550451 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Jan 29 11:22:41.569140 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:57796.service - OpenSSH per-connection server daemon (10.0.0.1:57796).
Jan 29 11:22:41.570048 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:57786.service: Deactivated successfully.
Jan 29 11:22:41.572442 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:22:41.573331 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:22:41.575235 systemd-logind[1584]: Removed session 2.
Jan 29 11:22:41.607088 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 57796 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:41.608561 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:41.613306 systemd-logind[1584]: New session 3 of user core.
Jan 29 11:22:41.628909 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:22:41.678511 sshd[1748]: Connection closed by 10.0.0.1 port 57796
Jan 29 11:22:41.679107 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Jan 29 11:22:41.687984 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:57806.service - OpenSSH per-connection server daemon (10.0.0.1:57806).
Jan 29 11:22:41.688878 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:57796.service: Deactivated successfully.
Jan 29 11:22:41.691350 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:22:41.692061 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:22:41.693842 systemd-logind[1584]: Removed session 3.
Jan 29 11:22:41.730434 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 57806 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:41.732169 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:41.737048 systemd-logind[1584]: New session 4 of user core.
Jan 29 11:22:41.749064 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:22:41.806401 sshd[1756]: Connection closed by 10.0.0.1 port 57806
Jan 29 11:22:41.806758 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 29 11:22:41.815001 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:57810.service - OpenSSH per-connection server daemon (10.0.0.1:57810).
Jan 29 11:22:41.815623 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:57806.service: Deactivated successfully.
Jan 29 11:22:41.818818 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:22:41.818904 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:22:41.821429 systemd-logind[1584]: Removed session 4.
Jan 29 11:22:41.853213 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 57810 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:41.854789 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:41.858856 systemd-logind[1584]: New session 5 of user core.
Jan 29 11:22:41.873980 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:22:41.931348 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:22:41.931719 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:22:41.950008 sudo[1765]: pam_unix(sudo:session): session closed for user root
Jan 29 11:22:41.951669 sshd[1764]: Connection closed by 10.0.0.1 port 57810
Jan 29 11:22:41.952070 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Jan 29 11:22:41.961973 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:57812.service - OpenSSH per-connection server daemon (10.0.0.1:57812).
Jan 29 11:22:41.962616 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:57810.service: Deactivated successfully.
Jan 29 11:22:41.964462 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:22:41.965245 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:22:41.966605 systemd-logind[1584]: Removed session 5.
Jan 29 11:22:41.999995 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 57812 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:42.001625 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:42.005658 systemd-logind[1584]: New session 6 of user core.
Jan 29 11:22:42.015925 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:22:42.069060 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:22:42.069403 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:22:42.072943 sudo[1775]: pam_unix(sudo:session): session closed for user root
Jan 29 11:22:42.079175 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 11:22:42.079505 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:22:42.096888 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:22:42.126443 augenrules[1797]: No rules
Jan 29 11:22:42.128278 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:22:42.128615 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:22:42.129865 sudo[1774]: pam_unix(sudo:session): session closed for user root
Jan 29 11:22:42.131233 sshd[1773]: Connection closed by 10.0.0.1 port 57812
Jan 29 11:22:42.131560 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Jan 29 11:22:42.148908 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:57822.service - OpenSSH per-connection server daemon (10.0.0.1:57822).
Jan 29 11:22:42.149615 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:57812.service: Deactivated successfully.
Jan 29 11:22:42.151390 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:22:42.152121 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:22:42.153313 systemd-logind[1584]: Removed session 6.
Jan 29 11:22:42.186205 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 57822 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:22:42.187834 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:22:42.191427 systemd-logind[1584]: New session 7 of user core.
Jan 29 11:22:42.200869 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:22:42.254517 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:22:42.254961 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:22:42.523976 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 11:22:42.524188 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 11:22:42.783554 dockerd[1830]: time="2025-01-29T11:22:42.783403742Z" level=info msg="Starting up"
Jan 29 11:22:42.844374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:22:42.851789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:22:43.588260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:22:43.592926 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:22:43.641840 kubelet[1866]: E0129 11:22:43.641772 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:22:43.650108 dockerd[1830]: time="2025-01-29T11:22:43.649884913Z" level=info msg="Loading containers: start."
Jan 29 11:22:43.650228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:22:43.650546 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:22:43.836683 kernel: Initializing XFRM netlink socket
Jan 29 11:22:43.925435 systemd-networkd[1252]: docker0: Link UP
Jan 29 11:22:43.962348 dockerd[1830]: time="2025-01-29T11:22:43.962277791Z" level=info msg="Loading containers: done."
Jan 29 11:22:43.977752 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4254520140-merged.mount: Deactivated successfully.
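Both kubelet crash loops in the log fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so these failures are expected until cluster bootstrap runs. Purely as a sketch of the file the kubelet is looking for (the field values below are illustrative assumptions, not taken from this host):

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm;
# hand-rolled here only to illustrate the expected format.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd               # illustrative; must match the container runtime
staticPodPath: /etc/kubernetes/manifests
```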
Jan 29 11:22:43.980321 dockerd[1830]: time="2025-01-29T11:22:43.980238364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:22:43.980456 dockerd[1830]: time="2025-01-29T11:22:43.980385881Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 11:22:43.980593 dockerd[1830]: time="2025-01-29T11:22:43.980563965Z" level=info msg="Daemon has completed initialization"
Jan 29 11:22:44.019547 dockerd[1830]: time="2025-01-29T11:22:44.019461952Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:22:44.019627 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:22:44.876776 containerd[1603]: time="2025-01-29T11:22:44.876739594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 11:22:45.435321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546617871.mount: Deactivated successfully.
Jan 29 11:22:46.434858 containerd[1603]: time="2025-01-29T11:22:46.434795645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:46.443738 containerd[1603]: time="2025-01-29T11:22:46.443653988Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 29 11:22:46.452964 containerd[1603]: time="2025-01-29T11:22:46.452914474Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:46.455878 containerd[1603]: time="2025-01-29T11:22:46.455808391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:46.457017 containerd[1603]: time="2025-01-29T11:22:46.456979428Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.580203866s"
Jan 29 11:22:46.457095 containerd[1603]: time="2025-01-29T11:22:46.457020314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 29 11:22:46.478746 containerd[1603]: time="2025-01-29T11:22:46.478694792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 11:22:48.844946 containerd[1603]: time="2025-01-29T11:22:48.844861839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:48.845920 containerd[1603]: time="2025-01-29T11:22:48.845861294Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 29 11:22:48.854942 containerd[1603]: time="2025-01-29T11:22:48.854884846Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:48.858335 containerd[1603]: time="2025-01-29T11:22:48.858294270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:48.859404 containerd[1603]: time="2025-01-29T11:22:48.859366010Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.380624981s"
Jan 29 11:22:48.859404 containerd[1603]: time="2025-01-29T11:22:48.859401887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 29 11:22:48.883667 containerd[1603]: time="2025-01-29T11:22:48.883571985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 11:22:50.037568 containerd[1603]: time="2025-01-29T11:22:50.037517464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:50.038292 containerd[1603]: time="2025-01-29T11:22:50.038263804Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 29 11:22:50.039432 containerd[1603]: time="2025-01-29T11:22:50.039405826Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:50.042262 containerd[1603]: time="2025-01-29T11:22:50.042225313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:22:50.043217 containerd[1603]: time="2025-01-29T11:22:50.043195543Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.159588492s"
Jan 29 11:22:50.043261 containerd[1603]: time="2025-01-29T11:22:50.043219077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 29 11:22:50.064762 containerd[1603]: time="2025-01-29T11:22:50.064722704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 11:22:51.041508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541473157.mount: Deactivated successfully.
Jan 29 11:22:52.027013 containerd[1603]: time="2025-01-29T11:22:52.026935283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:52.028771 containerd[1603]: time="2025-01-29T11:22:52.028726141Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:22:52.030498 containerd[1603]: time="2025-01-29T11:22:52.030437250Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:52.033011 containerd[1603]: time="2025-01-29T11:22:52.032964019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:52.033657 containerd[1603]: time="2025-01-29T11:22:52.033607285Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.968849025s" Jan 29 11:22:52.033701 containerd[1603]: time="2025-01-29T11:22:52.033640778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:22:52.057080 containerd[1603]: time="2025-01-29T11:22:52.057040140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:22:52.590860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489088238.mount: Deactivated successfully. 
Jan 29 11:22:53.900729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:22:53.923791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:22:54.057397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:22:54.062247 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:22:54.115352 kubelet[2207]: E0129 11:22:54.115297 2207 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:22:54.120504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:22:54.120843 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:22:54.425739 containerd[1603]: time="2025-01-29T11:22:54.425671620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.427575 containerd[1603]: time="2025-01-29T11:22:54.427510739Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:22:54.430533 containerd[1603]: time="2025-01-29T11:22:54.430498592Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.433788 containerd[1603]: time="2025-01-29T11:22:54.433751002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.434896 containerd[1603]: time="2025-01-29T11:22:54.434851996Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.377764727s" Jan 29 11:22:54.434896 containerd[1603]: time="2025-01-29T11:22:54.434891951Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:22:54.456580 containerd[1603]: time="2025-01-29T11:22:54.456540119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:22:54.893448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735437216.mount: Deactivated successfully. 
Jan 29 11:22:54.898793 containerd[1603]: time="2025-01-29T11:22:54.898756533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.899572 containerd[1603]: time="2025-01-29T11:22:54.899529733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:22:54.900733 containerd[1603]: time="2025-01-29T11:22:54.900702532Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.902928 containerd[1603]: time="2025-01-29T11:22:54.902900474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:54.903708 containerd[1603]: time="2025-01-29T11:22:54.903678173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 447.092869ms" Jan 29 11:22:54.903751 containerd[1603]: time="2025-01-29T11:22:54.903708890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:22:54.922956 containerd[1603]: time="2025-01-29T11:22:54.922918465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:22:55.450490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565706034.mount: Deactivated successfully. 
Jan 29 11:22:57.168369 containerd[1603]: time="2025-01-29T11:22:57.168311008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:57.169098 containerd[1603]: time="2025-01-29T11:22:57.169070913Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:22:57.170241 containerd[1603]: time="2025-01-29T11:22:57.170191294Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:57.173030 containerd[1603]: time="2025-01-29T11:22:57.172984071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:22:57.176483 containerd[1603]: time="2025-01-29T11:22:57.174097019Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.251147095s" Jan 29 11:22:57.176483 containerd[1603]: time="2025-01-29T11:22:57.174328172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:22:59.720161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:22:59.734843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:22:59.751013 systemd[1]: Reloading requested from client PID 2360 ('systemctl') (unit session-7.scope)... Jan 29 11:22:59.751029 systemd[1]: Reloading... 
Jan 29 11:22:59.827685 zram_generator::config[2402]: No configuration found. Jan 29 11:23:00.008318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:23:00.085716 systemd[1]: Reloading finished in 334 ms. Jan 29 11:23:00.129131 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:23:00.129232 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:23:00.129571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:00.132710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:00.271775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:00.277089 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:23:00.317211 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:23:00.317211 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:23:00.317211 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:23:00.317627 kubelet[2460]: I0129 11:23:00.317268 2460 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:23:00.682162 kubelet[2460]: I0129 11:23:00.682122 2460 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:23:00.682162 kubelet[2460]: I0129 11:23:00.682149 2460 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:23:00.682378 kubelet[2460]: I0129 11:23:00.682357 2460 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:23:00.695605 kubelet[2460]: I0129 11:23:00.695575 2460 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:23:00.696843 kubelet[2460]: E0129 11:23:00.696822 2460 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.706535 kubelet[2460]: I0129 11:23:00.706508 2460 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:23:00.707017 kubelet[2460]: I0129 11:23:00.706972 2460 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:23:00.707202 kubelet[2460]: I0129 11:23:00.707008 2460 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:23:00.707717 kubelet[2460]: I0129 11:23:00.707692 2460 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:23:00.707717 kubelet[2460]: I0129 11:23:00.707710 2460 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:23:00.707850 kubelet[2460]: I0129 11:23:00.707826 2460 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:00.708439 kubelet[2460]: I0129 11:23:00.708416 2460 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:23:00.708439 kubelet[2460]: I0129 11:23:00.708433 2460 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:23:00.708494 kubelet[2460]: I0129 11:23:00.708453 2460 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:23:00.708494 kubelet[2460]: I0129 11:23:00.708467 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:23:00.711806 kubelet[2460]: W0129 11:23:00.711657 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.711806 kubelet[2460]: E0129 11:23:00.711706 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.711806 kubelet[2460]: W0129 11:23:00.711752 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.711806 kubelet[2460]: E0129 11:23:00.711780 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: 
connection refused Jan 29 11:23:00.713832 kubelet[2460]: I0129 11:23:00.713805 2460 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:23:00.715189 kubelet[2460]: I0129 11:23:00.715164 2460 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:23:00.715246 kubelet[2460]: W0129 11:23:00.715227 2460 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:23:00.715996 kubelet[2460]: I0129 11:23:00.715876 2460 server.go:1264] "Started kubelet" Jan 29 11:23:00.716199 kubelet[2460]: I0129 11:23:00.716157 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:23:00.716555 kubelet[2460]: I0129 11:23:00.716541 2460 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:23:00.716590 kubelet[2460]: I0129 11:23:00.716576 2460 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:23:00.717793 kubelet[2460]: I0129 11:23:00.717100 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:23:00.717793 kubelet[2460]: I0129 11:23:00.717728 2460 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:23:00.721165 kubelet[2460]: E0129 11:23:00.721130 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:00.721212 kubelet[2460]: I0129 11:23:00.721175 2460 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:23:00.721357 kubelet[2460]: I0129 11:23:00.721278 2460 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:23:00.721357 kubelet[2460]: I0129 11:23:00.721325 2460 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:23:00.721610 kubelet[2460]: W0129 11:23:00.721572 
2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.721653 kubelet[2460]: E0129 11:23:00.721617 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.722734 kubelet[2460]: E0129 11:23:00.722068 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Jan 29 11:23:00.722734 kubelet[2460]: E0129 11:23:00.722533 2460 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f260435fddc3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:23:00.715854909 +0000 UTC m=+0.433624522,LastTimestamp:2025-01-29 11:23:00.715854909 +0000 UTC m=+0.433624522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:23:00.722734 kubelet[2460]: E0129 11:23:00.722663 2460 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:23:00.722854 kubelet[2460]: I0129 11:23:00.722772 2460 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:23:00.722854 kubelet[2460]: I0129 11:23:00.722841 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:23:00.723717 kubelet[2460]: I0129 11:23:00.723695 2460 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:23:00.733826 kubelet[2460]: I0129 11:23:00.733796 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:23:00.735012 kubelet[2460]: I0129 11:23:00.734991 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:23:00.735012 kubelet[2460]: I0129 11:23:00.735014 2460 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:23:00.735076 kubelet[2460]: I0129 11:23:00.735032 2460 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:23:00.735076 kubelet[2460]: E0129 11:23:00.735066 2460 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:23:00.739721 kubelet[2460]: W0129 11:23:00.738920 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.739721 kubelet[2460]: E0129 11:23:00.738948 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.145:6443: connect: connection refused Jan 29 11:23:00.748424 kubelet[2460]: I0129 11:23:00.748395 2460 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:23:00.748424 kubelet[2460]: I0129 11:23:00.748409 2460 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:23:00.748424 kubelet[2460]: I0129 11:23:00.748425 2460 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:00.822596 kubelet[2460]: I0129 11:23:00.822563 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:23:00.822832 kubelet[2460]: E0129 11:23:00.822801 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 11:23:00.836138 kubelet[2460]: E0129 11:23:00.836084 2460 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:23:00.922562 kubelet[2460]: E0129 11:23:00.922530 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Jan 29 11:23:01.024579 kubelet[2460]: I0129 11:23:01.024513 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:23:01.024814 kubelet[2460]: E0129 11:23:01.024779 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 11:23:01.031342 kubelet[2460]: I0129 11:23:01.031319 2460 policy_none.go:49] "None policy: Start" Jan 29 11:23:01.031771 kubelet[2460]: I0129 11:23:01.031753 2460 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:23:01.031804 kubelet[2460]: I0129 11:23:01.031776 2460 
state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:23:01.036815 kubelet[2460]: E0129 11:23:01.036792 2460 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:23:01.039201 kubelet[2460]: I0129 11:23:01.039177 2460 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:23:01.039391 kubelet[2460]: I0129 11:23:01.039361 2460 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:23:01.039469 kubelet[2460]: I0129 11:23:01.039455 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:23:01.040572 kubelet[2460]: E0129 11:23:01.040553 2460 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:23:01.323207 kubelet[2460]: E0129 11:23:01.323182 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Jan 29 11:23:01.426594 kubelet[2460]: I0129 11:23:01.426570 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:23:01.426852 kubelet[2460]: E0129 11:23:01.426830 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 11:23:01.436923 kubelet[2460]: I0129 11:23:01.436895 2460 topology_manager.go:215] "Topology Admit Handler" podUID="c87443542fdab119261fe737a411da1c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:23:01.437730 kubelet[2460]: I0129 11:23:01.437699 2460 topology_manager.go:215] "Topology Admit Handler" 
podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:23:01.438526 kubelet[2460]: I0129 11:23:01.438501 2460 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:23:01.525773 kubelet[2460]: I0129 11:23:01.525737 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:01.525773 kubelet[2460]: I0129 11:23:01.525767 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:01.525773 kubelet[2460]: I0129 11:23:01.525785 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:23:01.525933 kubelet[2460]: I0129 11:23:01.525803 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:01.525933 kubelet[2460]: I0129 11:23:01.525819 2460 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:01.525933 kubelet[2460]: I0129 11:23:01.525833 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:01.525933 kubelet[2460]: I0129 11:23:01.525857 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:01.525933 kubelet[2460]: I0129 11:23:01.525876 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:01.526059 kubelet[2460]: I0129 11:23:01.525893 2460 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:01.742666 kubelet[2460]: E0129 
11:23:01.742568 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:01.743055 kubelet[2460]: E0129 11:23:01.743015 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:01.743639 containerd[1603]: time="2025-01-29T11:23:01.743559492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:01.743639 containerd[1603]: time="2025-01-29T11:23:01.743594918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c87443542fdab119261fe737a411da1c,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:01.744915 kubelet[2460]: E0129 11:23:01.744895 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:01.745214 containerd[1603]: time="2025-01-29T11:23:01.745120810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:01.868980 kubelet[2460]: W0129 11:23:01.868916 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:01.869035 kubelet[2460]: E0129 11:23:01.868984 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.145:6443: connect: connection refused Jan 29 11:23:01.934623 kubelet[2460]: W0129 11:23:01.934583 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:01.934623 kubelet[2460]: E0129 11:23:01.934622 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.124078 kubelet[2460]: E0129 11:23:02.124023 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Jan 29 11:23:02.187837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183101884.mount: Deactivated successfully. 
Jan 29 11:23:02.194248 containerd[1603]: time="2025-01-29T11:23:02.194188532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:02.196989 containerd[1603]: time="2025-01-29T11:23:02.196952165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:23:02.198112 containerd[1603]: time="2025-01-29T11:23:02.198058670Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:02.200189 containerd[1603]: time="2025-01-29T11:23:02.200155132Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:02.200945 containerd[1603]: time="2025-01-29T11:23:02.200883628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:23:02.202094 containerd[1603]: time="2025-01-29T11:23:02.202054393Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:02.202868 containerd[1603]: time="2025-01-29T11:23:02.202799681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:23:02.203735 containerd[1603]: time="2025-01-29T11:23:02.203696864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:02.204186 
kubelet[2460]: W0129 11:23:02.204120 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.204186 kubelet[2460]: E0129 11:23:02.204189 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.205269 containerd[1603]: time="2025-01-29T11:23:02.205243965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.566512ms" Jan 29 11:23:02.206083 containerd[1603]: time="2025-01-29T11:23:02.206048774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.357435ms" Jan 29 11:23:02.209892 containerd[1603]: time="2025-01-29T11:23:02.209866323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.696933ms" Jan 29 11:23:02.229312 kubelet[2460]: I0129 11:23:02.228972 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 
11:23:02.229312 kubelet[2460]: E0129 11:23:02.229274 2460 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jan 29 11:23:02.274211 kubelet[2460]: W0129 11:23:02.274092 2460 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.274211 kubelet[2460]: E0129 11:23:02.274176 2460 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.319357 containerd[1603]: time="2025-01-29T11:23:02.319162122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:02.319357 containerd[1603]: time="2025-01-29T11:23:02.319208890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:02.319357 containerd[1603]: time="2025-01-29T11:23:02.319222505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.319357 containerd[1603]: time="2025-01-29T11:23:02.319309448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.319537 containerd[1603]: time="2025-01-29T11:23:02.319372216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:02.319537 containerd[1603]: time="2025-01-29T11:23:02.319474588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:02.319537 containerd[1603]: time="2025-01-29T11:23:02.319496058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.320074 containerd[1603]: time="2025-01-29T11:23:02.319852888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.324625 containerd[1603]: time="2025-01-29T11:23:02.324527284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:02.324702 containerd[1603]: time="2025-01-29T11:23:02.324602044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:02.325360 containerd[1603]: time="2025-01-29T11:23:02.325317135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.325455 containerd[1603]: time="2025-01-29T11:23:02.325420339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:02.383936 containerd[1603]: time="2025-01-29T11:23:02.383218811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c87443542fdab119261fe737a411da1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ee414d8e323ecdd77afe943ff021edc655f9c3a9ca228dfc37d804e1ceb046\"" Jan 29 11:23:02.384021 containerd[1603]: time="2025-01-29T11:23:02.383817294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1821594ab6f9471f54675319ca44025a76e11579b8eb11f9405fa577e801cb5a\"" Jan 29 11:23:02.385263 kubelet[2460]: E0129 11:23:02.385189 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:02.385263 kubelet[2460]: E0129 11:23:02.385209 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:02.388740 containerd[1603]: time="2025-01-29T11:23:02.388246239Z" level=info msg="CreateContainer within sandbox \"22ee414d8e323ecdd77afe943ff021edc655f9c3a9ca228dfc37d804e1ceb046\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:23:02.389352 containerd[1603]: time="2025-01-29T11:23:02.389322949Z" level=info msg="CreateContainer within sandbox \"1821594ab6f9471f54675319ca44025a76e11579b8eb11f9405fa577e801cb5a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:23:02.390060 containerd[1603]: time="2025-01-29T11:23:02.389962458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"101332f85aad6e7d367981d652ae3eae78e321da5634ec981bfba2b1c1bb5e40\"" Jan 29 11:23:02.390816 kubelet[2460]: E0129 11:23:02.390795 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:02.395177 containerd[1603]: time="2025-01-29T11:23:02.395151770Z" level=info msg="CreateContainer within sandbox \"101332f85aad6e7d367981d652ae3eae78e321da5634ec981bfba2b1c1bb5e40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:23:02.816765 containerd[1603]: time="2025-01-29T11:23:02.816635382Z" level=info msg="CreateContainer within sandbox \"101332f85aad6e7d367981d652ae3eae78e321da5634ec981bfba2b1c1bb5e40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7e98b78925a72e636e0c37ada431b4440f696d55734081008f5d632996e65b8\"" Jan 29 11:23:02.817483 containerd[1603]: time="2025-01-29T11:23:02.817416347Z" level=info msg="StartContainer for \"c7e98b78925a72e636e0c37ada431b4440f696d55734081008f5d632996e65b8\"" Jan 29 11:23:02.826454 containerd[1603]: time="2025-01-29T11:23:02.826394304Z" level=info msg="CreateContainer within sandbox \"1821594ab6f9471f54675319ca44025a76e11579b8eb11f9405fa577e801cb5a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3b9130368db7f877b14e08258de19c34d26481c686cbf673148cc375c0783fdb\"" Jan 29 11:23:02.826811 containerd[1603]: time="2025-01-29T11:23:02.826785838Z" level=info msg="StartContainer for \"3b9130368db7f877b14e08258de19c34d26481c686cbf673148cc375c0783fdb\"" Jan 29 11:23:02.830251 containerd[1603]: time="2025-01-29T11:23:02.830161839Z" level=info msg="CreateContainer within sandbox \"22ee414d8e323ecdd77afe943ff021edc655f9c3a9ca228dfc37d804e1ceb046\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f800e3d8d3c8e7c0c4467664dec4f3cd0c0a6cd7e5524eaca03c07c121d0c4f8\"" Jan 29 11:23:02.830554 
containerd[1603]: time="2025-01-29T11:23:02.830523988Z" level=info msg="StartContainer for \"f800e3d8d3c8e7c0c4467664dec4f3cd0c0a6cd7e5524eaca03c07c121d0c4f8\"" Jan 29 11:23:02.874477 kubelet[2460]: E0129 11:23:02.874437 2460 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.145:6443: connect: connection refused Jan 29 11:23:02.902315 containerd[1603]: time="2025-01-29T11:23:02.902266501Z" level=info msg="StartContainer for \"3b9130368db7f877b14e08258de19c34d26481c686cbf673148cc375c0783fdb\" returns successfully" Jan 29 11:23:02.903662 containerd[1603]: time="2025-01-29T11:23:02.903000838Z" level=info msg="StartContainer for \"c7e98b78925a72e636e0c37ada431b4440f696d55734081008f5d632996e65b8\" returns successfully" Jan 29 11:23:02.907260 containerd[1603]: time="2025-01-29T11:23:02.907224198Z" level=info msg="StartContainer for \"f800e3d8d3c8e7c0c4467664dec4f3cd0c0a6cd7e5524eaca03c07c121d0c4f8\" returns successfully" Jan 29 11:23:03.741531 kubelet[2460]: E0129 11:23:03.741487 2460 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:23:03.757822 kubelet[2460]: E0129 11:23:03.756407 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:03.762928 kubelet[2460]: E0129 11:23:03.762870 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:03.764676 kubelet[2460]: E0129 11:23:03.764638 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:03.831772 kubelet[2460]: I0129 11:23:03.831746 2460 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:23:03.839354 kubelet[2460]: I0129 11:23:03.839321 2460 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:23:03.854962 kubelet[2460]: E0129 11:23:03.854935 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:03.955318 kubelet[2460]: E0129 11:23:03.955268 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.056275 kubelet[2460]: E0129 11:23:04.056137 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.157300 kubelet[2460]: E0129 11:23:04.157265 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.258215 kubelet[2460]: E0129 11:23:04.258181 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.358708 kubelet[2460]: E0129 11:23:04.358660 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.459241 kubelet[2460]: E0129 11:23:04.459195 2460 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:23:04.711251 kubelet[2460]: I0129 11:23:04.711101 2460 apiserver.go:52] "Watching apiserver" Jan 29 11:23:04.722146 kubelet[2460]: I0129 11:23:04.722105 2460 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:23:04.772283 kubelet[2460]: E0129 11:23:04.772236 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:04.774704 kubelet[2460]: E0129 11:23:04.774673 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:04.775073 kubelet[2460]: E0129 11:23:04.775027 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:05.705436 systemd[1]: Reloading requested from client PID 2739 ('systemctl') (unit session-7.scope)... Jan 29 11:23:05.705452 systemd[1]: Reloading... Jan 29 11:23:05.776668 kubelet[2460]: E0129 11:23:05.773588 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:05.776668 kubelet[2460]: E0129 11:23:05.774041 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:05.776668 kubelet[2460]: E0129 11:23:05.774597 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:05.781751 zram_generator::config[2781]: No configuration found. Jan 29 11:23:05.896900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:23:05.974908 systemd[1]: Reloading finished in 269 ms. 
Jan 29 11:23:06.006439 kubelet[2460]: I0129 11:23:06.006307 2460 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:23:06.006358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:06.025080 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:23:06.025488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:06.041821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:06.181438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:06.191023 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:23:06.237465 kubelet[2833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:23:06.237465 kubelet[2833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:23:06.237465 kubelet[2833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:23:06.237889 kubelet[2833]: I0129 11:23:06.237463 2833 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:23:06.242533 kubelet[2833]: I0129 11:23:06.242499 2833 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:23:06.242533 kubelet[2833]: I0129 11:23:06.242526 2833 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:23:06.242754 kubelet[2833]: I0129 11:23:06.242732 2833 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:23:06.244028 kubelet[2833]: I0129 11:23:06.244007 2833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:23:06.245033 kubelet[2833]: I0129 11:23:06.245000 2833 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:23:06.252088 kubelet[2833]: I0129 11:23:06.252049 2833 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:23:06.252639 kubelet[2833]: I0129 11:23:06.252590 2833 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:23:06.253006 kubelet[2833]: I0129 11:23:06.252824 2833 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:23:06.253006 kubelet[2833]: I0129 11:23:06.253005 2833 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:23:06.253144 kubelet[2833]: I0129 11:23:06.253015 2833 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:23:06.253144 kubelet[2833]: I0129 11:23:06.253056 2833 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:06.253206 kubelet[2833]: I0129 11:23:06.253151 2833 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:23:06.253206 kubelet[2833]: I0129 11:23:06.253163 2833 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:23:06.253206 kubelet[2833]: I0129 11:23:06.253183 2833 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:23:06.253206 kubelet[2833]: I0129 11:23:06.253203 2833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:23:06.253996 kubelet[2833]: I0129 11:23:06.253688 2833 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:23:06.253996 kubelet[2833]: I0129 11:23:06.253889 2833 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:23:06.256531 kubelet[2833]: I0129 11:23:06.256510 2833 server.go:1264] "Started kubelet" Jan 29 11:23:06.258536 kubelet[2833]: I0129 11:23:06.258521 2833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:23:06.260463 kubelet[2833]: I0129 11:23:06.260412 2833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:23:06.261969 kubelet[2833]: I0129 11:23:06.261897 2833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:23:06.262188 kubelet[2833]: I0129 11:23:06.262163 2833 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:23:06.263114 kubelet[2833]: I0129 11:23:06.263096 2833 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:23:06.263973 kubelet[2833]: I0129 11:23:06.263942 2833 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:23:06.264291 kubelet[2833]: I0129 11:23:06.264278 2833 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:23:06.267000 kubelet[2833]: I0129 11:23:06.265383 2833 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:23:06.268021 kubelet[2833]: I0129 11:23:06.267999 2833 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:23:06.268021 kubelet[2833]: I0129 11:23:06.268018 2833 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:23:06.268113 kubelet[2833]: I0129 11:23:06.268089 2833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:23:06.269435 kubelet[2833]: E0129 11:23:06.268766 2833 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:23:06.274765 kubelet[2833]: I0129 11:23:06.274739 2833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:23:06.277295 kubelet[2833]: I0129 11:23:06.277269 2833 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:23:06.277337 kubelet[2833]: I0129 11:23:06.277301 2833 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:23:06.277734 kubelet[2833]: I0129 11:23:06.277319 2833 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:23:06.277734 kubelet[2833]: E0129 11:23:06.277461 2833 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:23:06.317017 kubelet[2833]: I0129 11:23:06.316964 2833 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:23:06.317017 kubelet[2833]: I0129 11:23:06.316988 2833 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:23:06.317017 kubelet[2833]: I0129 11:23:06.317011 2833 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:06.317186 kubelet[2833]: I0129 11:23:06.317160 2833 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:23:06.317186 kubelet[2833]: I0129 11:23:06.317170 2833 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:23:06.317186 kubelet[2833]: I0129 11:23:06.317187 2833 policy_none.go:49] "None policy: Start" Jan 29 11:23:06.317889 kubelet[2833]: I0129 11:23:06.317750 2833 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:23:06.317943 kubelet[2833]: I0129 11:23:06.317895 2833 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:23:06.318060 kubelet[2833]: I0129 11:23:06.318047 2833 state_mem.go:75] "Updated machine memory state" Jan 29 11:23:06.319466 kubelet[2833]: I0129 11:23:06.319444 2833 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:23:06.319762 kubelet[2833]: I0129 11:23:06.319613 2833 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:23:06.319762 kubelet[2833]: I0129 11:23:06.319734 2833 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:23:06.368434 kubelet[2833]: I0129 11:23:06.368390 2833 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:23:06.373986 kubelet[2833]: I0129 11:23:06.373947 2833 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:23:06.374106 kubelet[2833]: I0129 11:23:06.374067 2833 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:23:06.377865 kubelet[2833]: I0129 11:23:06.377835 2833 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:23:06.377969 kubelet[2833]: I0129 11:23:06.377926 2833 topology_manager.go:215] "Topology Admit Handler" podUID="c87443542fdab119261fe737a411da1c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:23:06.377969 kubelet[2833]: I0129 11:23:06.377963 2833 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:23:06.385518 kubelet[2833]: E0129 11:23:06.385466 2833 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:06.385762 kubelet[2833]: E0129 11:23:06.385738 2833 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:23:06.385887 kubelet[2833]: E0129 11:23:06.385860 2833 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.466298 kubelet[2833]: I0129 11:23:06.466265 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:23:06.466298 kubelet[2833]: I0129 11:23:06.466291 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:06.466437 kubelet[2833]: I0129 11:23:06.466311 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.466437 kubelet[2833]: I0129 11:23:06.466327 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.466437 kubelet[2833]: I0129 11:23:06.466344 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.466437 kubelet[2833]: I0129 11:23:06.466358 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:06.466437 kubelet[2833]: I0129 11:23:06.466374 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c87443542fdab119261fe737a411da1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c87443542fdab119261fe737a411da1c\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:06.466544 kubelet[2833]: I0129 11:23:06.466388 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.466544 kubelet[2833]: I0129 11:23:06.466405 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:23:06.686931 kubelet[2833]: E0129 11:23:06.686688 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:06.686931 kubelet[2833]: E0129 11:23:06.686745 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:06.686931 kubelet[2833]: E0129 11:23:06.686745 2833 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:07.254668 kubelet[2833]: I0129 11:23:07.254297 2833 apiserver.go:52] "Watching apiserver" Jan 29 11:23:07.264586 kubelet[2833]: I0129 11:23:07.264556 2833 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:23:07.291000 kubelet[2833]: E0129 11:23:07.290958 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:07.293504 kubelet[2833]: E0129 11:23:07.291574 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:07.301808 kubelet[2833]: E0129 11:23:07.301026 2833 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:23:07.301808 kubelet[2833]: E0129 11:23:07.301382 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:07.309892 kubelet[2833]: I0129 11:23:07.309815 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.309774054 podStartE2EDuration="3.309774054s" podCreationTimestamp="2025-01-29 11:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:07.309243843 +0000 UTC m=+1.114299948" watchObservedRunningTime="2025-01-29 11:23:07.309774054 +0000 UTC m=+1.114830159" Jan 29 11:23:07.325026 kubelet[2833]: I0129 11:23:07.324240 2833 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.324224407 podStartE2EDuration="3.324224407s" podCreationTimestamp="2025-01-29 11:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:07.323999665 +0000 UTC m=+1.129055770" watchObservedRunningTime="2025-01-29 11:23:07.324224407 +0000 UTC m=+1.129280512" Jan 29 11:23:07.325026 kubelet[2833]: I0129 11:23:07.324306 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.324302959 podStartE2EDuration="3.324302959s" podCreationTimestamp="2025-01-29 11:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:07.315223713 +0000 UTC m=+1.120279818" watchObservedRunningTime="2025-01-29 11:23:07.324302959 +0000 UTC m=+1.129359064" Jan 29 11:23:08.292004 kubelet[2833]: E0129 11:23:08.291954 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:09.859856 kubelet[2833]: E0129 11:23:09.859821 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:09.922704 kubelet[2833]: E0129 11:23:09.922635 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:10.613384 sudo[1810]: pam_unix(sudo:session): session closed for user root Jan 29 11:23:10.614774 sshd[1809]: Connection closed by 10.0.0.1 port 57822 Jan 29 11:23:10.615189 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 29 
11:23:10.619764 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:57822.service: Deactivated successfully. Jan 29 11:23:10.622030 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:23:10.622719 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:23:10.623527 systemd-logind[1584]: Removed session 7. Jan 29 11:23:11.429155 kubelet[2833]: E0129 11:23:11.429110 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:16.830426 update_engine[1589]: I20250129 11:23:16.830363 1589 update_attempter.cc:509] Updating boot flags... Jan 29 11:23:16.855706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2929) Jan 29 11:23:16.887532 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2933) Jan 29 11:23:16.922994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2933) Jan 29 11:23:19.863703 kubelet[2833]: E0129 11:23:19.863668 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:19.926230 kubelet[2833]: E0129 11:23:19.926196 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:21.176638 kubelet[2833]: I0129 11:23:21.176602 2833 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:23:21.177333 containerd[1603]: time="2025-01-29T11:23:21.177287740Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:23:21.177832 kubelet[2833]: I0129 11:23:21.177457 2833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:23:21.434443 kubelet[2833]: E0129 11:23:21.434314 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:21.983668 kubelet[2833]: I0129 11:23:21.983528 2833 topology_manager.go:215] "Topology Admit Handler" podUID="ecca8b98-e572-4249-9752-48f70522b157" podNamespace="kube-system" podName="kube-proxy-sch4n" Jan 29 11:23:22.072217 kubelet[2833]: I0129 11:23:22.072163 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ecca8b98-e572-4249-9752-48f70522b157-kube-proxy\") pod \"kube-proxy-sch4n\" (UID: \"ecca8b98-e572-4249-9752-48f70522b157\") " pod="kube-system/kube-proxy-sch4n" Jan 29 11:23:22.072217 kubelet[2833]: I0129 11:23:22.072204 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m667s\" (UniqueName: \"kubernetes.io/projected/ecca8b98-e572-4249-9752-48f70522b157-kube-api-access-m667s\") pod \"kube-proxy-sch4n\" (UID: \"ecca8b98-e572-4249-9752-48f70522b157\") " pod="kube-system/kube-proxy-sch4n" Jan 29 11:23:22.072217 kubelet[2833]: I0129 11:23:22.072224 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecca8b98-e572-4249-9752-48f70522b157-xtables-lock\") pod \"kube-proxy-sch4n\" (UID: \"ecca8b98-e572-4249-9752-48f70522b157\") " pod="kube-system/kube-proxy-sch4n" Jan 29 11:23:22.072401 kubelet[2833]: I0129 11:23:22.072237 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ecca8b98-e572-4249-9752-48f70522b157-lib-modules\") pod \"kube-proxy-sch4n\" (UID: \"ecca8b98-e572-4249-9752-48f70522b157\") " pod="kube-system/kube-proxy-sch4n" Jan 29 11:23:22.146348 kubelet[2833]: I0129 11:23:22.146194 2833 topology_manager.go:215] "Topology Admit Handler" podUID="253e128f-4997-42af-86a1-ff37698edc51" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-gzxtm" Jan 29 11:23:22.173336 kubelet[2833]: I0129 11:23:22.173295 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/253e128f-4997-42af-86a1-ff37698edc51-var-lib-calico\") pod \"tigera-operator-7bc55997bb-gzxtm\" (UID: \"253e128f-4997-42af-86a1-ff37698edc51\") " pod="tigera-operator/tigera-operator-7bc55997bb-gzxtm" Jan 29 11:23:22.173336 kubelet[2833]: I0129 11:23:22.173334 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgd9\" (UniqueName: \"kubernetes.io/projected/253e128f-4997-42af-86a1-ff37698edc51-kube-api-access-qbgd9\") pod \"tigera-operator-7bc55997bb-gzxtm\" (UID: \"253e128f-4997-42af-86a1-ff37698edc51\") " pod="tigera-operator/tigera-operator-7bc55997bb-gzxtm" Jan 29 11:23:22.288981 kubelet[2833]: E0129 11:23:22.288891 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:22.289370 containerd[1603]: time="2025-01-29T11:23:22.289214857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sch4n,Uid:ecca8b98-e572-4249-9752-48f70522b157,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:22.312561 containerd[1603]: time="2025-01-29T11:23:22.312314911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:22.312561 containerd[1603]: time="2025-01-29T11:23:22.312369035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:22.312561 containerd[1603]: time="2025-01-29T11:23:22.312384303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:22.312561 containerd[1603]: time="2025-01-29T11:23:22.312483461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:22.347419 containerd[1603]: time="2025-01-29T11:23:22.347336146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sch4n,Uid:ecca8b98-e572-4249-9752-48f70522b157,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a002e1a733d2e7fa37466ba0f07d5f4410933f41ee1b662cb742674bcdff9e2\"" Jan 29 11:23:22.347935 kubelet[2833]: E0129 11:23:22.347917 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:22.349854 containerd[1603]: time="2025-01-29T11:23:22.349818197Z" level=info msg="CreateContainer within sandbox \"4a002e1a733d2e7fa37466ba0f07d5f4410933f41ee1b662cb742674bcdff9e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:23:22.366619 containerd[1603]: time="2025-01-29T11:23:22.366575076Z" level=info msg="CreateContainer within sandbox \"4a002e1a733d2e7fa37466ba0f07d5f4410933f41ee1b662cb742674bcdff9e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"126a03d861744c23095f7b10fdc5fb31eb4a7cb5517c9ccca750549f0380115c\"" Jan 29 11:23:22.367046 containerd[1603]: time="2025-01-29T11:23:22.367020330Z" level=info msg="StartContainer for 
\"126a03d861744c23095f7b10fdc5fb31eb4a7cb5517c9ccca750549f0380115c\"" Jan 29 11:23:22.420750 containerd[1603]: time="2025-01-29T11:23:22.420712219Z" level=info msg="StartContainer for \"126a03d861744c23095f7b10fdc5fb31eb4a7cb5517c9ccca750549f0380115c\" returns successfully" Jan 29 11:23:22.452493 containerd[1603]: time="2025-01-29T11:23:22.452452158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gzxtm,Uid:253e128f-4997-42af-86a1-ff37698edc51,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:23:22.479310 containerd[1603]: time="2025-01-29T11:23:22.478473305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:22.479310 containerd[1603]: time="2025-01-29T11:23:22.478532847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:22.479310 containerd[1603]: time="2025-01-29T11:23:22.478546704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:22.479310 containerd[1603]: time="2025-01-29T11:23:22.478634961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:22.533190 containerd[1603]: time="2025-01-29T11:23:22.533142605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gzxtm,Uid:253e128f-4997-42af-86a1-ff37698edc51,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4c12d0ebca255e5d17b184a18fa55f64cae8e1a9e8b7c51a27508ddfba86475\"" Jan 29 11:23:22.535272 containerd[1603]: time="2025-01-29T11:23:22.535236280Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:23:23.315973 kubelet[2833]: E0129 11:23:23.315940 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:23.323288 kubelet[2833]: I0129 11:23:23.323240 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sch4n" podStartSLOduration=2.323222529 podStartE2EDuration="2.323222529s" podCreationTimestamp="2025-01-29 11:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:23.323100096 +0000 UTC m=+17.128156191" watchObservedRunningTime="2025-01-29 11:23:23.323222529 +0000 UTC m=+17.128278624" Jan 29 11:23:23.985969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742708260.mount: Deactivated successfully. 
Jan 29 11:23:24.388034 containerd[1603]: time="2025-01-29T11:23:24.387982973Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:24.388737 containerd[1603]: time="2025-01-29T11:23:24.388678519Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:23:24.389909 containerd[1603]: time="2025-01-29T11:23:24.389871536Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:24.392169 containerd[1603]: time="2025-01-29T11:23:24.392125230Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:24.392811 containerd[1603]: time="2025-01-29T11:23:24.392776081Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.857508692s" Jan 29 11:23:24.392811 containerd[1603]: time="2025-01-29T11:23:24.392802491Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:23:24.394935 containerd[1603]: time="2025-01-29T11:23:24.394896653Z" level=info msg="CreateContainer within sandbox \"d4c12d0ebca255e5d17b184a18fa55f64cae8e1a9e8b7c51a27508ddfba86475\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:23:24.502535 containerd[1603]: time="2025-01-29T11:23:24.502484983Z" level=info msg="CreateContainer within sandbox 
\"d4c12d0ebca255e5d17b184a18fa55f64cae8e1a9e8b7c51a27508ddfba86475\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1d24f6f19d0341efeba7e369621bbf64e8f7cb6b366ab148121c6c707473dce1\"" Jan 29 11:23:24.503125 containerd[1603]: time="2025-01-29T11:23:24.503061715Z" level=info msg="StartContainer for \"1d24f6f19d0341efeba7e369621bbf64e8f7cb6b366ab148121c6c707473dce1\"" Jan 29 11:23:24.565306 containerd[1603]: time="2025-01-29T11:23:24.565268099Z" level=info msg="StartContainer for \"1d24f6f19d0341efeba7e369621bbf64e8f7cb6b366ab148121c6c707473dce1\" returns successfully" Jan 29 11:23:27.405871 kubelet[2833]: I0129 11:23:27.405794 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-gzxtm" podStartSLOduration=3.5461285240000002 podStartE2EDuration="5.405774462s" podCreationTimestamp="2025-01-29 11:23:22 +0000 UTC" firstStartedPulling="2025-01-29 11:23:22.533976414 +0000 UTC m=+16.339032519" lastFinishedPulling="2025-01-29 11:23:24.393622352 +0000 UTC m=+18.198678457" observedRunningTime="2025-01-29 11:23:25.331448942 +0000 UTC m=+19.136505047" watchObservedRunningTime="2025-01-29 11:23:27.405774462 +0000 UTC m=+21.210830567" Jan 29 11:23:27.406474 kubelet[2833]: I0129 11:23:27.405943 2833 topology_manager.go:215] "Topology Admit Handler" podUID="659fbdea-0af7-4cb8-bb83-31663ca81960" podNamespace="calico-system" podName="calico-typha-65ddcc6dcd-xgpxn" Jan 29 11:23:27.438094 kubelet[2833]: I0129 11:23:27.438026 2833 topology_manager.go:215] "Topology Admit Handler" podUID="a604ab62-8067-431a-883d-8827e924e33c" podNamespace="calico-system" podName="calico-node-dk7q6" Jan 29 11:23:27.505006 kubelet[2833]: I0129 11:23:27.504901 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-lib-modules\") pod \"calico-node-dk7q6\" (UID: 
\"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505006 kubelet[2833]: I0129 11:23:27.504963 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-xtables-lock\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505006 kubelet[2833]: I0129 11:23:27.504989 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-run-calico\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505006 kubelet[2833]: I0129 11:23:27.505030 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-lib-calico\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505305 kubelet[2833]: I0129 11:23:27.505056 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/659fbdea-0af7-4cb8-bb83-31663ca81960-tigera-ca-bundle\") pod \"calico-typha-65ddcc6dcd-xgpxn\" (UID: \"659fbdea-0af7-4cb8-bb83-31663ca81960\") " pod="calico-system/calico-typha-65ddcc6dcd-xgpxn" Jan 29 11:23:27.505305 kubelet[2833]: I0129 11:23:27.505079 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-policysync\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " 
pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505305 kubelet[2833]: I0129 11:23:27.505098 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-net-dir\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505305 kubelet[2833]: I0129 11:23:27.505120 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhz4v\" (UniqueName: \"kubernetes.io/projected/a604ab62-8067-431a-883d-8827e924e33c-kube-api-access-dhz4v\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505305 kubelet[2833]: I0129 11:23:27.505139 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/659fbdea-0af7-4cb8-bb83-31663ca81960-typha-certs\") pod \"calico-typha-65ddcc6dcd-xgpxn\" (UID: \"659fbdea-0af7-4cb8-bb83-31663ca81960\") " pod="calico-system/calico-typha-65ddcc6dcd-xgpxn" Jan 29 11:23:27.505468 kubelet[2833]: I0129 11:23:27.505159 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jtq\" (UniqueName: \"kubernetes.io/projected/659fbdea-0af7-4cb8-bb83-31663ca81960-kube-api-access-t5jtq\") pod \"calico-typha-65ddcc6dcd-xgpxn\" (UID: \"659fbdea-0af7-4cb8-bb83-31663ca81960\") " pod="calico-system/calico-typha-65ddcc6dcd-xgpxn" Jan 29 11:23:27.505468 kubelet[2833]: I0129 11:23:27.505183 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a604ab62-8067-431a-883d-8827e924e33c-tigera-ca-bundle\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " 
pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505468 kubelet[2833]: I0129 11:23:27.505203 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-log-dir\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505468 kubelet[2833]: I0129 11:23:27.505223 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-flexvol-driver-host\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.505468 kubelet[2833]: I0129 11:23:27.505243 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a604ab62-8067-431a-883d-8827e924e33c-node-certs\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.506831 kubelet[2833]: I0129 11:23:27.505271 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-bin-dir\") pod \"calico-node-dk7q6\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " pod="calico-system/calico-node-dk7q6" Jan 29 11:23:27.547068 kubelet[2833]: I0129 11:23:27.546996 2833 topology_manager.go:215] "Topology Admit Handler" podUID="35088270-b85c-4fff-9f47-df92a059da0a" podNamespace="calico-system" podName="csi-node-driver-gnqjx" Jan 29 11:23:27.547363 kubelet[2833]: E0129 11:23:27.547338 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:27.606796 kubelet[2833]: I0129 11:23:27.606318 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/35088270-b85c-4fff-9f47-df92a059da0a-varrun\") pod \"csi-node-driver-gnqjx\" (UID: \"35088270-b85c-4fff-9f47-df92a059da0a\") " pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:27.606796 kubelet[2833]: I0129 11:23:27.606375 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35088270-b85c-4fff-9f47-df92a059da0a-kubelet-dir\") pod \"csi-node-driver-gnqjx\" (UID: \"35088270-b85c-4fff-9f47-df92a059da0a\") " pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:27.606796 kubelet[2833]: I0129 11:23:27.606484 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35088270-b85c-4fff-9f47-df92a059da0a-registration-dir\") pod \"csi-node-driver-gnqjx\" (UID: \"35088270-b85c-4fff-9f47-df92a059da0a\") " pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:27.606796 kubelet[2833]: I0129 11:23:27.606556 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35088270-b85c-4fff-9f47-df92a059da0a-socket-dir\") pod \"csi-node-driver-gnqjx\" (UID: \"35088270-b85c-4fff-9f47-df92a059da0a\") " pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:27.606796 kubelet[2833]: I0129 11:23:27.606591 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzgp\" (UniqueName: 
\"kubernetes.io/projected/35088270-b85c-4fff-9f47-df92a059da0a-kube-api-access-rjzgp\") pod \"csi-node-driver-gnqjx\" (UID: \"35088270-b85c-4fff-9f47-df92a059da0a\") " pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:27.608689 kubelet[2833]: E0129 11:23:27.608657 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.608689 kubelet[2833]: W0129 11:23:27.608682 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.608810 kubelet[2833]: E0129 11:23:27.608716 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:23:27.612659 kubelet[2833]: E0129 11:23:27.612604 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.612659 kubelet[2833]: W0129 11:23:27.612622 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.612659 kubelet[2833]: E0129 11:23:27.612634 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:23:27.616136 kubelet[2833]: E0129 11:23:27.616102 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.616136 kubelet[2833]: W0129 11:23:27.616128 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.616225 kubelet[2833]: E0129 11:23:27.616155 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:23:27.616454 kubelet[2833]: E0129 11:23:27.616420 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.616454 kubelet[2833]: W0129 11:23:27.616442 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.616546 kubelet[2833]: E0129 11:23:27.616466 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:23:27.713358 kubelet[2833]: E0129 11:23:27.713335 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:27.713426 kubelet[2833]: E0129 11:23:27.713402 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.713426 kubelet[2833]: W0129 11:23:27.713410 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.713727 kubelet[2833]: E0129 11:23:27.713577 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:23:27.713727 kubelet[2833]: E0129 11:23:27.713583 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.713727 kubelet[2833]: W0129 11:23:27.713610 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.713727 kubelet[2833]: E0129 11:23:27.713662 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:23:27.713895 kubelet[2833]: E0129 11:23:27.713884 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.713943 kubelet[2833]: W0129 11:23:27.713934 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.714001 kubelet[2833]: E0129 11:23:27.713989 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:23:27.714036 containerd[1603]: time="2025-01-29T11:23:27.713966810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ddcc6dcd-xgpxn,Uid:659fbdea-0af7-4cb8-bb83-31663ca81960,Namespace:calico-system,Attempt:0,}" Jan 29 11:23:27.714758 kubelet[2833]: E0129 11:23:27.714427 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.714758 kubelet[2833]: W0129 11:23:27.714439 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.714758 kubelet[2833]: E0129 11:23:27.714448 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:23:27.751397 kubelet[2833]: E0129 11:23:27.751363 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:27.751874 containerd[1603]: time="2025-01-29T11:23:27.751833628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dk7q6,Uid:a604ab62-8067-431a-883d-8827e924e33c,Namespace:calico-system,Attempt:0,}" Jan 29 11:23:27.806510 kubelet[2833]: E0129 11:23:27.806484 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:23:27.806510 kubelet[2833]: W0129 11:23:27.806503 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:23:27.806664 kubelet[2833]: E0129 11:23:27.806521 2833 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:23:28.112237 containerd[1603]: time="2025-01-29T11:23:28.112121486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:28.112510 containerd[1603]: time="2025-01-29T11:23:28.112212818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:28.112510 containerd[1603]: time="2025-01-29T11:23:28.112364194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:28.112510 containerd[1603]: time="2025-01-29T11:23:28.112473581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:28.120899 containerd[1603]: time="2025-01-29T11:23:28.120814049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:28.120899 containerd[1603]: time="2025-01-29T11:23:28.120864715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:28.121096 containerd[1603]: time="2025-01-29T11:23:28.120879232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:28.121096 containerd[1603]: time="2025-01-29T11:23:28.120953823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:28.158244 containerd[1603]: time="2025-01-29T11:23:28.158209034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dk7q6,Uid:a604ab62-8067-431a-883d-8827e924e33c,Namespace:calico-system,Attempt:0,} returns sandbox id \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\"" Jan 29 11:23:28.159209 kubelet[2833]: E0129 11:23:28.159180 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:28.160292 containerd[1603]: time="2025-01-29T11:23:28.160232524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:23:28.171872 containerd[1603]: time="2025-01-29T11:23:28.171843847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ddcc6dcd-xgpxn,Uid:659fbdea-0af7-4cb8-bb83-31663ca81960,Namespace:calico-system,Attempt:0,} returns sandbox id \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\"" Jan 29 11:23:28.172339 kubelet[2833]: E0129 
11:23:28.172306 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:29.278521 kubelet[2833]: E0129 11:23:29.278465 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:29.867323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144498059.mount: Deactivated successfully. Jan 29 11:23:29.939223 containerd[1603]: time="2025-01-29T11:23:29.939174915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:29.939946 containerd[1603]: time="2025-01-29T11:23:29.939905404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:23:29.941023 containerd[1603]: time="2025-01-29T11:23:29.940981886Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:29.943491 containerd[1603]: time="2025-01-29T11:23:29.943459752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:29.944122 containerd[1603]: time="2025-01-29T11:23:29.944087326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.783818714s" Jan 29 11:23:29.944184 containerd[1603]: time="2025-01-29T11:23:29.944125929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:23:29.945156 containerd[1603]: time="2025-01-29T11:23:29.945094638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:23:29.946515 containerd[1603]: time="2025-01-29T11:23:29.946477558Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:23:29.961273 containerd[1603]: time="2025-01-29T11:23:29.961225111Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\"" Jan 29 11:23:29.961551 containerd[1603]: time="2025-01-29T11:23:29.961512543Z" level=info msg="StartContainer for \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\"" Jan 29 11:23:30.030702 containerd[1603]: time="2025-01-29T11:23:30.030584740Z" level=info msg="StartContainer for \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\" returns successfully" Jan 29 11:23:30.110935 containerd[1603]: time="2025-01-29T11:23:30.110882757Z" level=info msg="shim disconnected" id=ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444 namespace=k8s.io Jan 29 11:23:30.110935 containerd[1603]: time="2025-01-29T11:23:30.110930486Z" level=warning msg="cleaning up after shim disconnected" 
id=ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444 namespace=k8s.io Jan 29 11:23:30.110935 containerd[1603]: time="2025-01-29T11:23:30.110938421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:23:30.337488 kubelet[2833]: E0129 11:23:30.336870 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:30.849345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444-rootfs.mount: Deactivated successfully. Jan 29 11:23:31.278943 kubelet[2833]: E0129 11:23:31.278779 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:31.897911 containerd[1603]: time="2025-01-29T11:23:31.897848700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:31.922563 containerd[1603]: time="2025-01-29T11:23:31.922485782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:23:31.933891 containerd[1603]: time="2025-01-29T11:23:31.933831228Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:31.950807 containerd[1603]: time="2025-01-29T11:23:31.950754192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
11:23:31.951516 containerd[1603]: time="2025-01-29T11:23:31.951481573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.00635725s" Jan 29 11:23:31.951516 containerd[1603]: time="2025-01-29T11:23:31.951515778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:23:31.952495 containerd[1603]: time="2025-01-29T11:23:31.952454980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:23:31.959031 containerd[1603]: time="2025-01-29T11:23:31.958989973Z" level=info msg="CreateContainer within sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:23:32.188474 containerd[1603]: time="2025-01-29T11:23:32.188353054Z" level=info msg="CreateContainer within sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\"" Jan 29 11:23:32.189008 containerd[1603]: time="2025-01-29T11:23:32.188979575Z" level=info msg="StartContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\"" Jan 29 11:23:32.331453 containerd[1603]: time="2025-01-29T11:23:32.331356063Z" level=info msg="StartContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" returns successfully" Jan 29 11:23:32.342971 kubelet[2833]: E0129 11:23:32.342927 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:33.278585 kubelet[2833]: E0129 11:23:33.278505 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:33.343858 kubelet[2833]: I0129 11:23:33.343827 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:23:33.344413 kubelet[2833]: E0129 11:23:33.344388 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:35.278663 kubelet[2833]: E0129 11:23:35.278604 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:35.652832 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154). 
Jan 29 11:23:36.062457 containerd[1603]: time="2025-01-29T11:23:36.062385727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:23:36.063392 containerd[1603]: time="2025-01-29T11:23:36.063181074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 11:23:36.064354 containerd[1603]: time="2025-01-29T11:23:36.064313125Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:23:36.066472 containerd[1603]: time="2025-01-29T11:23:36.066418470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:23:36.067044 containerd[1603]: time="2025-01-29T11:23:36.067022858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.114527623s"
Jan 29 11:23:36.067098 containerd[1603]: time="2025-01-29T11:23:36.067048887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 11:23:36.085380 containerd[1603]: time="2025-01-29T11:23:36.085350689Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:23:36.094860 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:23:36.097001 sshd-session[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:23:36.099302 containerd[1603]: time="2025-01-29T11:23:36.099264357Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\""
Jan 29 11:23:36.100595 containerd[1603]: time="2025-01-29T11:23:36.100547533Z" level=info msg="StartContainer for \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\""
Jan 29 11:23:36.102283 systemd-logind[1584]: New session 8 of user core.
Jan 29 11:23:36.110923 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:23:36.369731 sshd[3494]: Connection closed by 10.0.0.1 port 51154
Jan 29 11:23:36.370073 sshd-session[3471]: pam_unix(sshd:session): session closed for user core
Jan 29 11:23:36.373259 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:51154.service: Deactivated successfully.
Jan 29 11:23:36.376174 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:23:36.376513 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:23:36.377692 systemd-logind[1584]: Removed session 8.
Jan 29 11:23:36.527658 containerd[1603]: time="2025-01-29T11:23:36.527587101Z" level=info msg="StartContainer for \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\" returns successfully"
Jan 29 11:23:37.277886 kubelet[2833]: E0129 11:23:37.277825 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a"
Jan 29 11:23:37.544184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5-rootfs.mount: Deactivated successfully.
Jan 29 11:23:37.546474 kubelet[2833]: I0129 11:23:37.546420 2833 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 11:23:37.546682 kubelet[2833]: E0129 11:23:37.546501 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:37.547253 containerd[1603]: time="2025-01-29T11:23:37.547192023Z" level=info msg="shim disconnected" id=f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5 namespace=k8s.io
Jan 29 11:23:37.547253 containerd[1603]: time="2025-01-29T11:23:37.547243740Z" level=warning msg="cleaning up after shim disconnected" id=f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5 namespace=k8s.io
Jan 29 11:23:37.547253 containerd[1603]: time="2025-01-29T11:23:37.547252476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:23:37.566075 kubelet[2833]: I0129 11:23:37.565988 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65ddcc6dcd-xgpxn" podStartSLOduration=6.787979518 podStartE2EDuration="10.565971795s" podCreationTimestamp="2025-01-29 11:23:27 +0000 UTC" firstStartedPulling="2025-01-29 11:23:28.174237347 +0000 UTC m=+21.979293452" lastFinishedPulling="2025-01-29 11:23:31.952229624 +0000 UTC m=+25.757285729" observedRunningTime="2025-01-29 11:23:32.426008171 +0000 UTC m=+26.231064276" watchObservedRunningTime="2025-01-29 11:23:37.565971795 +0000 UTC m=+31.371027900"
Jan 29 11:23:37.566517 kubelet[2833]: I0129 11:23:37.566486 2833 topology_manager.go:215] "Topology Admit Handler" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:37.571570 containerd[1603]: time="2025-01-29T11:23:37.571103305Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:23:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:23:37.572936 kubelet[2833]: I0129 11:23:37.572638 2833 topology_manager.go:215] "Topology Admit Handler" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e" podNamespace="calico-apiserver" podName="calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:37.572936 kubelet[2833]: I0129 11:23:37.572881 2833 topology_manager.go:215] "Topology Admit Handler" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" podNamespace="calico-system" podName="calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:37.573017 kubelet[2833]: I0129 11:23:37.573003 2833 topology_manager.go:215] "Topology Admit Handler" podUID="622451af-befd-4d1a-89be-df128077d7a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:37.575692 kubelet[2833]: I0129 11:23:37.575603 2833 topology_manager.go:215] "Topology Admit Handler" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742" podNamespace="calico-apiserver" podName="calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:37.619848 kubelet[2833]: I0129 11:23:37.619803 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/254f70df-c108-425a-b324-8fe9c6bfe00e-config-volume\") pod \"coredns-7db6d8ff4d-q2fch\" (UID: \"254f70df-c108-425a-b324-8fe9c6bfe00e\") " pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:37.620006 kubelet[2833]: I0129 11:23:37.619875 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztt9x\" (UniqueName: \"kubernetes.io/projected/337f98ac-1b65-4615-aa71-55b1dcfcd61e-kube-api-access-ztt9x\") pod \"calico-apiserver-b9f5bc8c-l4mfg\" (UID: \"337f98ac-1b65-4615-aa71-55b1dcfcd61e\") " pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:37.620006 kubelet[2833]: I0129 11:23:37.619925 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p858\" (UniqueName: \"kubernetes.io/projected/10ddf1c0-21b1-4d7e-af9d-b4ca369b7742-kube-api-access-4p858\") pod \"calico-apiserver-b9f5bc8c-kfgmw\" (UID: \"10ddf1c0-21b1-4d7e-af9d-b4ca369b7742\") " pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:37.620006 kubelet[2833]: I0129 11:23:37.619943 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/337f98ac-1b65-4615-aa71-55b1dcfcd61e-calico-apiserver-certs\") pod \"calico-apiserver-b9f5bc8c-l4mfg\" (UID: \"337f98ac-1b65-4615-aa71-55b1dcfcd61e\") " pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:37.620006 kubelet[2833]: I0129 11:23:37.619959 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4nc\" (UniqueName: \"kubernetes.io/projected/254f70df-c108-425a-b324-8fe9c6bfe00e-kube-api-access-4s4nc\") pod \"coredns-7db6d8ff4d-q2fch\" (UID: \"254f70df-c108-425a-b324-8fe9c6bfe00e\") " pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:37.620113 kubelet[2833]: I0129 11:23:37.620027 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/622451af-befd-4d1a-89be-df128077d7a6-config-volume\") pod \"coredns-7db6d8ff4d-8qdw4\" (UID: \"622451af-befd-4d1a-89be-df128077d7a6\") " pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:37.620113 kubelet[2833]: I0129 11:23:37.620084 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tts8r\" (UniqueName: \"kubernetes.io/projected/622451af-befd-4d1a-89be-df128077d7a6-kube-api-access-tts8r\") pod \"coredns-7db6d8ff4d-8qdw4\" (UID: \"622451af-befd-4d1a-89be-df128077d7a6\") " pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:37.620113 kubelet[2833]: I0129 11:23:37.620104 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10ddf1c0-21b1-4d7e-af9d-b4ca369b7742-calico-apiserver-certs\") pod \"calico-apiserver-b9f5bc8c-kfgmw\" (UID: \"10ddf1c0-21b1-4d7e-af9d-b4ca369b7742\") " pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:37.620189 kubelet[2833]: I0129 11:23:37.620130 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-tigera-ca-bundle\") pod \"calico-kube-controllers-695974dcd7-g2c9b\" (UID: \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\") " pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:37.620189 kubelet[2833]: I0129 11:23:37.620146 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqqdz\" (UniqueName: \"kubernetes.io/projected/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-kube-api-access-cqqdz\") pod \"calico-kube-controllers-695974dcd7-g2c9b\" (UID: \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\") " pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:37.873108 kubelet[2833]: E0129 11:23:37.873045 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:37.873983 containerd[1603]: time="2025-01-29T11:23:37.873895606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:0,}"
Jan 29 11:23:37.883319 containerd[1603]: time="2025-01-29T11:23:37.883278830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:0,}"
Jan 29 11:23:37.886063 containerd[1603]: time="2025-01-29T11:23:37.885992759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:23:37.886372 containerd[1603]: time="2025-01-29T11:23:37.886335124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:23:37.887551 kubelet[2833]: E0129 11:23:37.887530 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:37.887946 containerd[1603]: time="2025-01-29T11:23:37.887907253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:0,}"
Jan 29 11:23:37.971617 containerd[1603]: time="2025-01-29T11:23:37.971567692Z" level=error msg="Failed to destroy network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:37.972020 containerd[1603]: time="2025-01-29T11:23:37.971988634Z" level=error msg="encountered an error cleaning up failed sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:37.972061 containerd[1603]: time="2025-01-29T11:23:37.972047244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:37.972345 kubelet[2833]: E0129 11:23:37.972288 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:37.972394 kubelet[2833]: E0129 11:23:37.972374 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:37.972421 kubelet[2833]: E0129 11:23:37.972402 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:37.972496 kubelet[2833]: E0129 11:23:37.972464 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e"
Jan 29 11:23:38.134870 containerd[1603]: time="2025-01-29T11:23:38.134620386Z" level=error msg="Failed to destroy network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.135702 containerd[1603]: time="2025-01-29T11:23:38.135147839Z" level=error msg="Failed to destroy network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.136054 containerd[1603]: time="2025-01-29T11:23:38.135897760Z" level=error msg="encountered an error cleaning up failed sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.136054 containerd[1603]: time="2025-01-29T11:23:38.135956982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.136211 kubelet[2833]: E0129 11:23:38.136172 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.136302 kubelet[2833]: E0129 11:23:38.136230 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:38.136302 kubelet[2833]: E0129 11:23:38.136249 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:38.136302 kubelet[2833]: E0129 11:23:38.136284 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c"
Jan 29 11:23:38.136572 containerd[1603]: time="2025-01-29T11:23:38.136316138Z" level=error msg="encountered an error cleaning up failed sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.136572 containerd[1603]: time="2025-01-29T11:23:38.136345443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.137447 kubelet[2833]: E0129 11:23:38.137268 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.137447 kubelet[2833]: E0129 11:23:38.137329 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:38.137447 kubelet[2833]: E0129 11:23:38.137349 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:38.137545 kubelet[2833]: E0129 11:23:38.137387 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e"
Jan 29 11:23:38.142138 containerd[1603]: time="2025-01-29T11:23:38.142102077Z" level=error msg="Failed to destroy network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.142510 containerd[1603]: time="2025-01-29T11:23:38.142485900Z" level=error msg="encountered an error cleaning up failed sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.142554 containerd[1603]: time="2025-01-29T11:23:38.142537236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.142787 kubelet[2833]: E0129 11:23:38.142751 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.142838 kubelet[2833]: E0129 11:23:38.142813 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:38.142871 kubelet[2833]: E0129 11:23:38.142840 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:38.142911 kubelet[2833]: E0129 11:23:38.142887 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6"
Jan 29 11:23:38.146240 containerd[1603]: time="2025-01-29T11:23:38.146175623Z" level=error msg="Failed to destroy network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.146595 containerd[1603]: time="2025-01-29T11:23:38.146570296Z" level=error msg="encountered an error cleaning up failed sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.146659 containerd[1603]: time="2025-01-29T11:23:38.146619809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.146917 kubelet[2833]: E0129 11:23:38.146876 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.146961 kubelet[2833]: E0129 11:23:38.146931 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:38.146961 kubelet[2833]: E0129 11:23:38.146953 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:38.147022 kubelet[2833]: E0129 11:23:38.146991 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742"
Jan 29 11:23:38.548705 kubelet[2833]: I0129 11:23:38.548598 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb"
Jan 29 11:23:38.549346 containerd[1603]: time="2025-01-29T11:23:38.549308063Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\""
Jan 29 11:23:38.549750 containerd[1603]: time="2025-01-29T11:23:38.549497059Z" level=info msg="Ensure that sandbox 338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb in task-service has been cleanup successfully"
Jan 29 11:23:38.550470 containerd[1603]: time="2025-01-29T11:23:38.550291815Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully"
Jan 29 11:23:38.550470 containerd[1603]: time="2025-01-29T11:23:38.550307234Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully"
Jan 29 11:23:38.551476 containerd[1603]: time="2025-01-29T11:23:38.550863722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:1,}"
Jan 29 11:23:38.552034 kubelet[2833]: E0129 11:23:38.552001 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:38.552047 systemd[1]: run-netns-cni\x2d92d3ae5a\x2d594e\x2d1744\x2d1181\x2d193919386b6d.mount: Deactivated successfully.
Jan 29 11:23:38.553332 containerd[1603]: time="2025-01-29T11:23:38.553101513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 11:23:38.553420 kubelet[2833]: I0129 11:23:38.553202 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7"
Jan 29 11:23:38.553662 containerd[1603]: time="2025-01-29T11:23:38.553594901Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\""
Jan 29 11:23:38.553842 containerd[1603]: time="2025-01-29T11:23:38.553816207Z" level=info msg="Ensure that sandbox 35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7 in task-service has been cleanup successfully"
Jan 29 11:23:38.554617 containerd[1603]: time="2025-01-29T11:23:38.554479707Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully"
Jan 29 11:23:38.554617 containerd[1603]: time="2025-01-29T11:23:38.554499584Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully"
Jan 29 11:23:38.554887 kubelet[2833]: E0129 11:23:38.554750 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:38.555093 containerd[1603]: time="2025-01-29T11:23:38.554995197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:1,}"
Jan 29 11:23:38.556632 kubelet[2833]: I0129 11:23:38.556548 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc"
Jan 29 11:23:38.557362 systemd[1]: run-netns-cni\x2d0d1a33e8\x2d5cce\x2d8b45\x2d8443\x2d83027354e10c.mount: Deactivated successfully.
Jan 29 11:23:38.558206 containerd[1603]: time="2025-01-29T11:23:38.558118505Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\""
Jan 29 11:23:38.558436 kubelet[2833]: I0129 11:23:38.558089 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40"
Jan 29 11:23:38.558711 containerd[1603]: time="2025-01-29T11:23:38.558691453Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:23:38.558881 containerd[1603]: time="2025-01-29T11:23:38.558700781Z" level=info msg="Ensure that sandbox 1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc in task-service has been cleanup successfully"
Jan 29 11:23:38.558915 containerd[1603]: time="2025-01-29T11:23:38.558875920Z" level=info msg="Ensure that sandbox 5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40 in task-service has been cleanup successfully"
Jan 29 11:23:38.559995 containerd[1603]: time="2025-01-29T11:23:38.559014912Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully"
Jan 29 11:23:38.559995 containerd[1603]: time="2025-01-29T11:23:38.559029990Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully"
Jan 29 11:23:38.559995 containerd[1603]: time="2025-01-29T11:23:38.559752781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:23:38.560121 kubelet[2833]: I0129 11:23:38.559312 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1"
Jan 29 11:23:38.561378 containerd[1603]: time="2025-01-29T11:23:38.560771498Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\""
Jan 29 11:23:38.561079 systemd[1]: run-netns-cni\x2d58fdd493\x2da950\x2dad33\x2ddfa4\x2dcec0c32ab430.mount: Deactivated successfully.
Jan 29 11:23:38.561238 systemd[1]: run-netns-cni\x2de3b59b17\x2df70f\x2de8ea\x2d603e\x2ddaa88255d46f.mount: Deactivated successfully.
Jan 29 11:23:38.561518 containerd[1603]: time="2025-01-29T11:23:38.561488027Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully"
Jan 29 11:23:38.561518 containerd[1603]: time="2025-01-29T11:23:38.561502274Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully"
Jan 29 11:23:38.561784 containerd[1603]: time="2025-01-29T11:23:38.561758486Z" level=info msg="Ensure that sandbox cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1 in task-service has been cleanup successfully"
Jan 29 11:23:38.562787 kubelet[2833]: E0129 11:23:38.562145 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:38.563379 containerd[1603]: time="2025-01-29T11:23:38.562288252Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully"
Jan 29 11:23:38.563379 containerd[1603]: time="2025-01-29T11:23:38.562303291Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully"
Jan 29 11:23:38.563379 containerd[1603]: time="2025-01-29T11:23:38.562687725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:1,}"
Jan 29 11:23:38.563379 containerd[1603]: time="2025-01-29T11:23:38.563186814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:23:38.566716 systemd[1]: run-netns-cni\x2d41a9c8c0\x2df2d4\x2d2e12\x2d2667\x2d325f56870d7b.mount: Deactivated successfully.
Jan 29 11:23:38.701526 containerd[1603]: time="2025-01-29T11:23:38.701385920Z" level=error msg="Failed to destroy network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.702043 containerd[1603]: time="2025-01-29T11:23:38.702023920Z" level=error msg="encountered an error cleaning up failed sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.702157 containerd[1603]: time="2025-01-29T11:23:38.702140640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:38.702809 kubelet[2833]: E0129 11:23:38.702451 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\":
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.702809 kubelet[2833]: E0129 11:23:38.702509 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:38.702809 kubelet[2833]: E0129 11:23:38.702529 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:38.702921 kubelet[2833]: E0129 11:23:38.702570 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e" Jan 29 11:23:38.704826 containerd[1603]: 
time="2025-01-29T11:23:38.704704325Z" level=error msg="Failed to destroy network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.705627 containerd[1603]: time="2025-01-29T11:23:38.705587266Z" level=error msg="encountered an error cleaning up failed sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.705940 containerd[1603]: time="2025-01-29T11:23:38.705894244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.706221 kubelet[2833]: E0129 11:23:38.706189 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.706782 kubelet[2833]: E0129 11:23:38.706337 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:38.706782 kubelet[2833]: E0129 11:23:38.706380 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:38.706782 kubelet[2833]: E0129 11:23:38.706440 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6" Jan 29 11:23:38.708614 containerd[1603]: time="2025-01-29T11:23:38.708519415Z" level=error msg="Failed to destroy network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.708937 
containerd[1603]: time="2025-01-29T11:23:38.708909469Z" level=error msg="encountered an error cleaning up failed sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709383 containerd[1603]: time="2025-01-29T11:23:38.709073968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709383 containerd[1603]: time="2025-01-29T11:23:38.709232356Z" level=error msg="Failed to destroy network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709500 kubelet[2833]: E0129 11:23:38.709347 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709500 kubelet[2833]: E0129 11:23:38.709393 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:38.709500 kubelet[2833]: E0129 11:23:38.709413 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:38.709592 kubelet[2833]: E0129 11:23:38.709451 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742" Jan 29 11:23:38.709659 containerd[1603]: time="2025-01-29T11:23:38.709572387Z" level=error msg="encountered an error cleaning up failed sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709683 containerd[1603]: time="2025-01-29T11:23:38.709631177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709895 kubelet[2833]: E0129 11:23:38.709873 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.709947 kubelet[2833]: E0129 11:23:38.709910 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:38.709947 kubelet[2833]: E0129 11:23:38.709929 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:38.710011 kubelet[2833]: E0129 11:23:38.709959 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" Jan 29 11:23:38.714620 containerd[1603]: time="2025-01-29T11:23:38.714581414Z" level=error msg="Failed to destroy network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.714922 containerd[1603]: time="2025-01-29T11:23:38.714895945Z" level=error msg="encountered an error cleaning up failed sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.714958 containerd[1603]: time="2025-01-29T11:23:38.714932043Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.715110 kubelet[2833]: E0129 11:23:38.715072 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:38.715110 kubelet[2833]: E0129 11:23:38.715105 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:38.715181 kubelet[2833]: E0129 11:23:38.715122 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:38.715181 kubelet[2833]: E0129 11:23:38.715153 2833 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e" Jan 29 11:23:39.281762 containerd[1603]: time="2025-01-29T11:23:39.281721415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:0,}" Jan 29 11:23:39.337549 containerd[1603]: time="2025-01-29T11:23:39.337505459Z" level=error msg="Failed to destroy network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:39.337899 containerd[1603]: time="2025-01-29T11:23:39.337873832Z" level=error msg="encountered an error cleaning up failed sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:39.337938 containerd[1603]: time="2025-01-29T11:23:39.337922523Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:39.338208 kubelet[2833]: E0129 11:23:39.338150 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:39.338208 kubelet[2833]: E0129 11:23:39.338211 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:39.338368 kubelet[2833]: E0129 11:23:39.338229 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:39.338368 kubelet[2833]: E0129 11:23:39.338270 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:39.561940 kubelet[2833]: I0129 11:23:39.561798 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e" Jan 29 11:23:39.562442 containerd[1603]: time="2025-01-29T11:23:39.562409262Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:23:39.563596 containerd[1603]: time="2025-01-29T11:23:39.562595142Z" level=info msg="Ensure that sandbox 8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e in task-service has been cleanup successfully" Jan 29 11:23:39.563596 containerd[1603]: time="2025-01-29T11:23:39.562835594Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:23:39.563596 containerd[1603]: time="2025-01-29T11:23:39.562848147Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:23:39.563596 containerd[1603]: time="2025-01-29T11:23:39.563547293Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:23:39.564364 kubelet[2833]: I0129 11:23:39.563851 2833 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6" Jan 29 11:23:39.565068 systemd[1]: run-netns-cni\x2d62f024bc\x2df555\x2da6a6\x2deef4\x2dc58e012c0855.mount: Deactivated successfully. Jan 29 11:23:39.566234 kubelet[2833]: I0129 11:23:39.566186 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7" Jan 29 11:23:39.567014 containerd[1603]: time="2025-01-29T11:23:39.563618497Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:23:39.567014 containerd[1603]: time="2025-01-29T11:23:39.566993066Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:23:39.567113 containerd[1603]: time="2025-01-29T11:23:39.564239045Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" Jan 29 11:23:39.567200 containerd[1603]: time="2025-01-29T11:23:39.566897928Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:23:39.567200 containerd[1603]: time="2025-01-29T11:23:39.567169349Z" level=info msg="Ensure that sandbox 783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6 in task-service has been cleanup successfully" Jan 29 11:23:39.567317 kubelet[2833]: I0129 11:23:39.567152 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa" Jan 29 11:23:39.567375 containerd[1603]: time="2025-01-29T11:23:39.567329851Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully" Jan 29 11:23:39.567375 containerd[1603]: time="2025-01-29T11:23:39.567343006Z" level=info msg="StopPodSandbox for 
\"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully" Jan 29 11:23:39.567564 containerd[1603]: time="2025-01-29T11:23:39.567546779Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:23:39.567798 containerd[1603]: time="2025-01-29T11:23:39.567678858Z" level=info msg="Ensure that sandbox f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa in task-service has been cleanup successfully" Jan 29 11:23:39.567925 containerd[1603]: time="2025-01-29T11:23:39.567910042Z" level=info msg="Ensure that sandbox a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7 in task-service has been cleanup successfully" Jan 29 11:23:39.567994 containerd[1603]: time="2025-01-29T11:23:39.567973372Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" Jan 29 11:23:39.568023 containerd[1603]: time="2025-01-29T11:23:39.567924319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:23:39.568372 containerd[1603]: time="2025-01-29T11:23:39.568199467Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully" Jan 29 11:23:39.568372 containerd[1603]: time="2025-01-29T11:23:39.568217370Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully" Jan 29 11:23:39.568372 containerd[1603]: time="2025-01-29T11:23:39.568200729Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:23:39.568372 containerd[1603]: time="2025-01-29T11:23:39.568246906Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 
11:23:39.568474 containerd[1603]: time="2025-01-29T11:23:39.568382080Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully"
Jan 29 11:23:39.568474 containerd[1603]: time="2025-01-29T11:23:39.568393722Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully"
Jan 29 11:23:39.568895 containerd[1603]: time="2025-01-29T11:23:39.568606654Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\""
Jan 29 11:23:39.568895 containerd[1603]: time="2025-01-29T11:23:39.568689489Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully"
Jan 29 11:23:39.568895 containerd[1603]: time="2025-01-29T11:23:39.568718824Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully"
Jan 29 11:23:39.568895 containerd[1603]: time="2025-01-29T11:23:39.568765322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:2,}"
Jan 29 11:23:39.568895 containerd[1603]: time="2025-01-29T11:23:39.568868185Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\""
Jan 29 11:23:39.569446 containerd[1603]: time="2025-01-29T11:23:39.568932617Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully"
Jan 29 11:23:39.569446 containerd[1603]: time="2025-01-29T11:23:39.568941133Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully"
Jan 29 11:23:39.569446 containerd[1603]: time="2025-01-29T11:23:39.569310297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:2,}"
Jan 29 11:23:39.569446 containerd[1603]: time="2025-01-29T11:23:39.569423651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:2,}"
Jan 29 11:23:39.569546 kubelet[2833]: E0129 11:23:39.569104 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:39.569546 kubelet[2833]: I0129 11:23:39.569199 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b"
Jan 29 11:23:39.569753 containerd[1603]: time="2025-01-29T11:23:39.569724126Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\""
Jan 29 11:23:39.570405 containerd[1603]: time="2025-01-29T11:23:39.570347749Z" level=info msg="Ensure that sandbox 2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b in task-service has been cleanup successfully"
Jan 29 11:23:39.570893 containerd[1603]: time="2025-01-29T11:23:39.570767188Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully"
Jan 29 11:23:39.570893 containerd[1603]: time="2025-01-29T11:23:39.570781545Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully"
Jan 29 11:23:39.570948 kubelet[2833]: I0129 11:23:39.570778 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43"
Jan 29 11:23:39.571035 systemd[1]: run-netns-cni\x2d718d1a32\x2d5d90\x2d90ad\x2d9129\x2d0f457f6e7a10.mount: Deactivated successfully.
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571043579Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571111666Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully"
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571120553Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully"
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571195184Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\""
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571317544Z" level=info msg="Ensure that sandbox 0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43 in task-service has been cleanup successfully"
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571444733Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully"
Jan 29 11:23:39.571562 containerd[1603]: time="2025-01-29T11:23:39.571455022Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully"
Jan 29 11:23:39.571913 kubelet[2833]: E0129 11:23:39.571514 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:39.571231 systemd[1]: run-netns-cni\x2d6d0c09d7\x2d6ac5\x2d8df9\x2d5977\x2d55aef7a06864.mount: Deactivated successfully.
Jan 29 11:23:39.572007 containerd[1603]: time="2025-01-29T11:23:39.571786136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:1,}"
Jan 29 11:23:39.572007 containerd[1603]: time="2025-01-29T11:23:39.571924516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:2,}"
Jan 29 11:23:39.571397 systemd[1]: run-netns-cni\x2ddb51fcb5\x2dd78e\x2d6aee\x2d0f10\x2d28d93ceeab61.mount: Deactivated successfully.
Jan 29 11:23:39.575051 systemd[1]: run-netns-cni\x2dc49f80fa\x2d0315\x2da875\x2d2d4f\x2dac470f3add3e.mount: Deactivated successfully.
Jan 29 11:23:39.575201 systemd[1]: run-netns-cni\x2d95c0f34b\x2d6d91\x2d3ee7\x2d1e1e\x2dfa397cd56dde.mount: Deactivated successfully.
Jan 29 11:23:39.758220 containerd[1603]: time="2025-01-29T11:23:39.758167915Z" level=error msg="Failed to destroy network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.759006 containerd[1603]: time="2025-01-29T11:23:39.758896936Z" level=error msg="encountered an error cleaning up failed sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.759006 containerd[1603]: time="2025-01-29T11:23:39.758958172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.759414 kubelet[2833]: E0129 11:23:39.759376 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.759472 kubelet[2833]: E0129 11:23:39.759439 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:39.759505 kubelet[2833]: E0129 11:23:39.759468 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg"
Jan 29 11:23:39.759546 kubelet[2833]: E0129 11:23:39.759514 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e"
Jan 29 11:23:39.771883 containerd[1603]: time="2025-01-29T11:23:39.771717871Z" level=error msg="Failed to destroy network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.772338 containerd[1603]: time="2025-01-29T11:23:39.772311108Z" level=error msg="encountered an error cleaning up failed sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.772474 containerd[1603]: time="2025-01-29T11:23:39.772448115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.772787 kubelet[2833]: E0129 11:23:39.772753 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.772942 kubelet[2833]: E0129 11:23:39.772915 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:39.773826 kubelet[2833]: E0129 11:23:39.773007 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4"
Jan 29 11:23:39.773826 kubelet[2833]: E0129 11:23:39.773062 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6"
Jan 29 11:23:39.778489 containerd[1603]: time="2025-01-29T11:23:39.778443797Z" level=error msg="Failed to destroy network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.779198 containerd[1603]: time="2025-01-29T11:23:39.779059375Z" level=error msg="encountered an error cleaning up failed sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.779198 containerd[1603]: time="2025-01-29T11:23:39.779118566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.779557 kubelet[2833]: E0129 11:23:39.779522 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.779731 kubelet[2833]: E0129 11:23:39.779708 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:39.779871 kubelet[2833]: E0129 11:23:39.779824 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b"
Jan 29 11:23:39.780235 kubelet[2833]: E0129 11:23:39.780192 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c"
Jan 29 11:23:39.787948 containerd[1603]: time="2025-01-29T11:23:39.787889920Z" level=error msg="Failed to destroy network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.788568 containerd[1603]: time="2025-01-29T11:23:39.788528622Z" level=error msg="encountered an error cleaning up failed sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.788744 containerd[1603]: time="2025-01-29T11:23:39.788606169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.788945 kubelet[2833]: E0129 11:23:39.788894 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.789019 kubelet[2833]: E0129 11:23:39.788965 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:39.789019 kubelet[2833]: E0129 11:23:39.788985 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch"
Jan 29 11:23:39.789081 kubelet[2833]: E0129 11:23:39.789020 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e"
Jan 29 11:23:39.795862 containerd[1603]: time="2025-01-29T11:23:39.795805946Z" level=error msg="Failed to destroy network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.796260 containerd[1603]: time="2025-01-29T11:23:39.796229743Z" level=error msg="encountered an error cleaning up failed sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.796323 containerd[1603]: time="2025-01-29T11:23:39.796295697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.796502 kubelet[2833]: E0129 11:23:39.796474 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.796586 kubelet[2833]: E0129 11:23:39.796513 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx"
Jan 29 11:23:39.796586 kubelet[2833]: E0129 11:23:39.796531 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx"
Jan 29 11:23:39.796586 kubelet[2833]: E0129 11:23:39.796564 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a"
Jan 29 11:23:39.803714 containerd[1603]: time="2025-01-29T11:23:39.803673759Z" level=error msg="Failed to destroy network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.804054 containerd[1603]: time="2025-01-29T11:23:39.804025812Z" level=error msg="encountered an error cleaning up failed sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.804096 containerd[1603]: time="2025-01-29T11:23:39.804075516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.804271 kubelet[2833]: E0129 11:23:39.804238 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:23:39.804271 kubelet[2833]: E0129 11:23:39.804267 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:39.804337 kubelet[2833]: E0129 11:23:39.804282 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw"
Jan 29 11:23:39.804337 kubelet[2833]: E0129 11:23:39.804313 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742"
Jan 29 11:23:40.546900 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3-shm.mount: Deactivated successfully.
Jan 29 11:23:40.574108 kubelet[2833]: I0129 11:23:40.574074 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a"
Jan 29 11:23:40.575048 containerd[1603]: time="2025-01-29T11:23:40.574741942Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\""
Jan 29 11:23:40.575048 containerd[1603]: time="2025-01-29T11:23:40.574954401Z" level=info msg="Ensure that sandbox a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a in task-service has been cleanup successfully"
Jan 29 11:23:40.575619 containerd[1603]: time="2025-01-29T11:23:40.575131464Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully"
Jan 29 11:23:40.575619 containerd[1603]: time="2025-01-29T11:23:40.575144028Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully"
Jan 29 11:23:40.575619 containerd[1603]: time="2025-01-29T11:23:40.575400721Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\""
Jan 29 11:23:40.575619 containerd[1603]: time="2025-01-29T11:23:40.575470863Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully"
Jan 29 11:23:40.575619 containerd[1603]: time="2025-01-29T11:23:40.575479910Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully"
Jan 29 11:23:40.576434 containerd[1603]: time="2025-01-29T11:23:40.575929025Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\""
Jan 29 11:23:40.576434 containerd[1603]: time="2025-01-29T11:23:40.576012211Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully"
Jan 29 11:23:40.576434 containerd[1603]: time="2025-01-29T11:23:40.576021358Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully"
Jan 29 11:23:40.576530 containerd[1603]: time="2025-01-29T11:23:40.576495871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:3,}"
Jan 29 11:23:40.576559 kubelet[2833]: I0129 11:23:40.576435 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3"
Jan 29 11:23:40.577691 containerd[1603]: time="2025-01-29T11:23:40.577001542Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\""
Jan 29 11:23:40.577830 systemd[1]: run-netns-cni\x2d7d52f677\x2dac6b\x2d4c4f\x2dd580\x2d01ef980e8fe8.mount: Deactivated successfully.
Jan 29 11:23:40.578150 containerd[1603]: time="2025-01-29T11:23:40.577993228Z" level=info msg="Ensure that sandbox fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3 in task-service has been cleanup successfully"
Jan 29 11:23:40.578198 containerd[1603]: time="2025-01-29T11:23:40.578168728Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully"
Jan 29 11:23:40.578198 containerd[1603]: time="2025-01-29T11:23:40.578181643Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully"
Jan 29 11:23:40.579566 kubelet[2833]: I0129 11:23:40.579540 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629"
Jan 29 11:23:40.580174 containerd[1603]: time="2025-01-29T11:23:40.580046942Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\""
Jan 29 11:23:40.580174 containerd[1603]: time="2025-01-29T11:23:40.580085083Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\""
Jan 29 11:23:40.580174 containerd[1603]: time="2025-01-29T11:23:40.580123645Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully"
Jan 29 11:23:40.580174 containerd[1603]: time="2025-01-29T11:23:40.580133535Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully"
Jan 29 11:23:40.580301 containerd[1603]: time="2025-01-29T11:23:40.580278307Z" level=info msg="Ensure that sandbox d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629 in task-service has been cleanup successfully"
Jan 29 11:23:40.580750 containerd[1603]: time="2025-01-29T11:23:40.580726360Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully"
Jan 29 11:23:40.580876 containerd[1603]: time="2025-01-29T11:23:40.580842238Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully"
Jan 29 11:23:40.581273 containerd[1603]: time="2025-01-29T11:23:40.581008220Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\""
Jan 29 11:23:40.581273 containerd[1603]: time="2025-01-29T11:23:40.581079714Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully"
Jan 29 11:23:40.581273 containerd[1603]: time="2025-01-29T11:23:40.581088250Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully"
Jan 29 11:23:40.581089 systemd[1]: run-netns-cni\x2d80094396\x2de01e\x2d1a78\x2d37db\x2d12b4150c383b.mount: Deactivated successfully.
Jan 29 11:23:40.581901 containerd[1603]: time="2025-01-29T11:23:40.581665406Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\""
Jan 29 11:23:40.581901 containerd[1603]: time="2025-01-29T11:23:40.581696104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:3,}"
Jan 29 11:23:40.581901 containerd[1603]: time="2025-01-29T11:23:40.581756668Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully"
Jan 29 11:23:40.581901 containerd[1603]: time="2025-01-29T11:23:40.581767639Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully"
Jan 29 11:23:40.582612 containerd[1603]: time="2025-01-29T11:23:40.582291193Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\""
Jan 29 11:23:40.582612 containerd[1603]: time="2025-01-29T11:23:40.582408494Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully"
Jan 29 11:23:40.582612 containerd[1603]: time="2025-01-29T11:23:40.582419044Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully"
Jan 29 11:23:40.583275 kubelet[2833]: I0129 11:23:40.582988 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175"
Jan 29 11:23:40.583275 kubelet[2833]: E0129 11:23:40.583231 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:23:40.583617 containerd[1603]: time="2025-01-29T11:23:40.583473658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:3,}"
Jan 29 11:23:40.583617 containerd[1603]: time="2025-01-29T11:23:40.583515346Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\""
Jan 29 11:23:40.583630 systemd[1]: run-netns-cni\x2d7790cd53\x2dad85\x2d3c72\x2dc72b\x2d797524021054.mount: Deactivated successfully.
Jan 29 11:23:40.588052 kubelet[2833]: I0129 11:23:40.587756 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256"
Jan 29 11:23:40.588547 containerd[1603]: time="2025-01-29T11:23:40.588522286Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\""
Jan 29 11:23:40.589119 containerd[1603]: time="2025-01-29T11:23:40.589012198Z" level=info msg="Ensure that sandbox ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256 in task-service has been cleanup successfully"
Jan 29 11:23:40.590337 kubelet[2833]: I0129 11:23:40.589757 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293"
Jan 29 11:23:40.590386 containerd[1603]: time="2025-01-29T11:23:40.590103881Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\""
Jan 29 11:23:40.590386 containerd[1603]: time="2025-01-29T11:23:40.590247120Z" level=info msg="Ensure that sandbox 73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293 in task-service has been cleanup successfully"
Jan 29 11:23:40.590504 containerd[1603]: time="2025-01-29T11:23:40.590490458Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully"
Jan 29 11:23:40.590614 containerd[1603]: time="2025-01-29T11:23:40.590601948Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully"
Jan 29 11:23:40.590929 containerd[1603]: time="2025-01-29T11:23:40.590913664Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\""
Jan 29 11:23:40.591089 containerd[1603]: time="2025-01-29T11:23:40.591073676Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully"
Jan 29 11:23:40.591149 containerd[1603]: time="2025-01-29T11:23:40.591134850Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully"
Jan 29 11:23:40.591438 containerd[1603]: time="2025-01-29T11:23:40.591330989Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:23:40.591438 containerd[1603]: time="2025-01-29T11:23:40.591398266Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully"
Jan 29 11:23:40.591438 containerd[1603]: time="2025-01-29T11:23:40.591406462Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully"
Jan 29 11:23:40.591513 systemd[1]: run-netns-cni\x2d68e88e99\x2dc969\x2dba66\x2d37b7\x2dca3595c9f0ed.mount: Deactivated successfully.
Jan 29 11:23:40.592015 kubelet[2833]: E0129 11:23:40.591665 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:40.592055 containerd[1603]: time="2025-01-29T11:23:40.591866588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:3,}" Jan 29 11:23:40.592132 containerd[1603]: time="2025-01-29T11:23:40.592052688Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:23:40.592132 containerd[1603]: time="2025-01-29T11:23:40.592129401Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:23:40.592484 containerd[1603]: time="2025-01-29T11:23:40.592343404Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:23:40.592484 containerd[1603]: time="2025-01-29T11:23:40.592416061Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:23:40.592484 containerd[1603]: time="2025-01-29T11:23:40.592427272Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:23:40.593253 containerd[1603]: time="2025-01-29T11:23:40.593088005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:2,}" Jan 29 11:23:40.841445 containerd[1603]: time="2025-01-29T11:23:40.841380209Z" level=info msg="Ensure that sandbox 8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175 in task-service has been cleanup successfully" Jan 29 11:23:40.841636 containerd[1603]: 
time="2025-01-29T11:23:40.841594662Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:23:40.841636 containerd[1603]: time="2025-01-29T11:23:40.841611894Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:23:40.842136 containerd[1603]: time="2025-01-29T11:23:40.842106305Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:23:40.842226 containerd[1603]: time="2025-01-29T11:23:40.842212294Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:23:40.842226 containerd[1603]: time="2025-01-29T11:23:40.842224187Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:23:40.842678 containerd[1603]: time="2025-01-29T11:23:40.842566501Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:23:40.842799 containerd[1603]: time="2025-01-29T11:23:40.842756007Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:23:40.842799 containerd[1603]: time="2025-01-29T11:23:40.842775524Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:23:40.843243 containerd[1603]: time="2025-01-29T11:23:40.843214580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:23:40.980635 containerd[1603]: time="2025-01-29T11:23:40.980498913Z" level=error msg="Failed to destroy network for sandbox 
\"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.981345 containerd[1603]: time="2025-01-29T11:23:40.981321951Z" level=error msg="encountered an error cleaning up failed sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.981625 containerd[1603]: time="2025-01-29T11:23:40.981604172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.981944 kubelet[2833]: E0129 11:23:40.981903 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.982007 kubelet[2833]: E0129 11:23:40.981972 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:40.982007 kubelet[2833]: E0129 11:23:40.981994 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:40.982058 kubelet[2833]: E0129 11:23:40.982031 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:40.985231 containerd[1603]: time="2025-01-29T11:23:40.985066235Z" level=error msg="Failed to destroy network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.985590 containerd[1603]: time="2025-01-29T11:23:40.985558912Z" level=error msg="encountered an error cleaning up 
failed sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.985633 containerd[1603]: time="2025-01-29T11:23:40.985609747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.986764 kubelet[2833]: E0129 11:23:40.986235 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.986764 kubelet[2833]: E0129 11:23:40.986297 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:40.986764 kubelet[2833]: E0129 11:23:40.986323 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:40.986878 kubelet[2833]: E0129 11:23:40.986373 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" Jan 29 11:23:40.991897 containerd[1603]: time="2025-01-29T11:23:40.991847612Z" level=error msg="Failed to destroy network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.992287 containerd[1603]: time="2025-01-29T11:23:40.992258024Z" level=error msg="encountered an error cleaning up failed sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 29 11:23:40.992369 containerd[1603]: time="2025-01-29T11:23:40.992338806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.992609 kubelet[2833]: E0129 11:23:40.992573 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:40.992719 kubelet[2833]: E0129 11:23:40.992694 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:40.992749 kubelet[2833]: E0129 11:23:40.992723 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:40.992817 kubelet[2833]: E0129 11:23:40.992778 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e" Jan 29 11:23:41.011616 containerd[1603]: time="2025-01-29T11:23:41.011575902Z" level=error msg="Failed to destroy network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.012436 containerd[1603]: time="2025-01-29T11:23:41.012393138Z" level=error msg="encountered an error cleaning up failed sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.012495 containerd[1603]: time="2025-01-29T11:23:41.012467910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.012751 kubelet[2833]: E0129 11:23:41.012705 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.012814 kubelet[2833]: E0129 11:23:41.012771 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:41.012814 kubelet[2833]: E0129 11:23:41.012800 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:41.012888 kubelet[2833]: E0129 11:23:41.012849 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6" Jan 29 11:23:41.017183 containerd[1603]: time="2025-01-29T11:23:41.017146309Z" level=error msg="Failed to destroy network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.017682 containerd[1603]: time="2025-01-29T11:23:41.017639907Z" level=error msg="encountered an error cleaning up failed sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.017749 containerd[1603]: time="2025-01-29T11:23:41.017698678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.017953 kubelet[2833]: E0129 11:23:41.017871 2833 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.018070 kubelet[2833]: E0129 11:23:41.017955 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:41.018070 kubelet[2833]: E0129 11:23:41.017997 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:41.018125 kubelet[2833]: E0129 11:23:41.018067 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e" Jan 29 11:23:41.019999 containerd[1603]: time="2025-01-29T11:23:41.019820779Z" level=error msg="Failed to destroy network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.020365 containerd[1603]: time="2025-01-29T11:23:41.020331039Z" level=error msg="encountered an error cleaning up failed sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.020432 containerd[1603]: time="2025-01-29T11:23:41.020402894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:41.020871 kubelet[2833]: E0129 11:23:41.020714 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 29 11:23:41.020980 kubelet[2833]: E0129 11:23:41.020941 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:41.021037 kubelet[2833]: E0129 11:23:41.020982 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:41.021037 kubelet[2833]: E0129 11:23:41.021013 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742" Jan 29 11:23:41.382945 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:51158.service - OpenSSH per-connection server daemon 
(10.0.0.1:51158). Jan 29 11:23:41.425523 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 51158 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:41.427169 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:41.431928 systemd-logind[1584]: New session 9 of user core. Jan 29 11:23:41.437027 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:23:41.547822 systemd[1]: run-netns-cni\x2d7a5d5d65\x2dc991\x2d4f86\x2d60ec\x2dc870011f486d.mount: Deactivated successfully. Jan 29 11:23:41.548003 systemd[1]: run-netns-cni\x2d14c3e9b8\x2d9388\x2db487\x2d018f\x2df340b64f8b5c.mount: Deactivated successfully. Jan 29 11:23:41.572117 sshd[4417]: Connection closed by 10.0.0.1 port 51158 Jan 29 11:23:41.571075 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:41.574613 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:51158.service: Deactivated successfully. Jan 29 11:23:41.578675 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:23:41.578740 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:23:41.580382 systemd-logind[1584]: Removed session 9. 
Jan 29 11:23:41.598980 kubelet[2833]: I0129 11:23:41.598954 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2" Jan 29 11:23:41.599787 containerd[1603]: time="2025-01-29T11:23:41.599682068Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:23:41.600925 containerd[1603]: time="2025-01-29T11:23:41.599881663Z" level=info msg="Ensure that sandbox 553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2 in task-service has been cleanup successfully" Jan 29 11:23:41.601414 containerd[1603]: time="2025-01-29T11:23:41.601386092Z" level=info msg="TearDown network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" successfully" Jan 29 11:23:41.601452 containerd[1603]: time="2025-01-29T11:23:41.601414365Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" returns successfully" Jan 29 11:23:41.601802 containerd[1603]: time="2025-01-29T11:23:41.601768932Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:23:41.602362 kubelet[2833]: I0129 11:23:41.602311 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a" Jan 29 11:23:41.603380 containerd[1603]: time="2025-01-29T11:23:41.603344877Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\"" Jan 29 11:23:41.603810 containerd[1603]: time="2025-01-29T11:23:41.603548830Z" level=info msg="Ensure that sandbox ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a in task-service has been cleanup successfully" Jan 29 11:23:41.603596 systemd[1]: run-netns-cni\x2d24d0f940\x2da088\x2d23b5\x2deb44\x2db5f44f89c1a6.mount: Deactivated successfully. 
Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.604719032Z" level=info msg="TearDown network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" successfully" Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.604736014Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" returns successfully" Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.604945978Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\"" Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.605021360Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully" Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.605033583Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully" Jan 29 11:23:41.607659 containerd[1603]: time="2025-01-29T11:23:41.607608646Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\"" Jan 29 11:23:41.607160 systemd[1]: run-netns-cni\x2dac91ed9c\x2da1be\x2d0621\x2d8c6d\x2d2f4db7fc4dda.mount: Deactivated successfully. 
Jan 29 11:23:41.607916 kubelet[2833]: I0129 11:23:41.607792 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8" Jan 29 11:23:41.607949 containerd[1603]: time="2025-01-29T11:23:41.607717551Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully" Jan 29 11:23:41.607949 containerd[1603]: time="2025-01-29T11:23:41.607728702Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully" Jan 29 11:23:41.608958 containerd[1603]: time="2025-01-29T11:23:41.608935752Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:23:41.609001 containerd[1603]: time="2025-01-29T11:23:41.608957583Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:23:41.609281 containerd[1603]: time="2025-01-29T11:23:41.609260744Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:23:41.609359 containerd[1603]: time="2025-01-29T11:23:41.609343299Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609356534Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609390167Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609519190Z" level=info msg="StopPodSandbox for 
\"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\"" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609533146Z" level=info msg="Ensure that sandbox 283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8 in task-service has been cleanup successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609593058Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609601815Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609728333Z" level=info msg="TearDown network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" successfully" Jan 29 11:23:41.610227 containerd[1603]: time="2025-01-29T11:23:41.609742349Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" returns successfully" Jan 29 11:23:41.610454 kubelet[2833]: E0129 11:23:41.610035 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:41.610488 containerd[1603]: time="2025-01-29T11:23:41.610262999Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:23:41.610488 containerd[1603]: time="2025-01-29T11:23:41.610335895Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:23:41.610488 containerd[1603]: time="2025-01-29T11:23:41.610344783Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:23:41.610488 containerd[1603]: 
time="2025-01-29T11:23:41.610464107Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:23:41.610571 containerd[1603]: time="2025-01-29T11:23:41.610525783Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully" Jan 29 11:23:41.610571 containerd[1603]: time="2025-01-29T11:23:41.610534329Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully" Jan 29 11:23:41.610758 containerd[1603]: time="2025-01-29T11:23:41.610710591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:4,}" Jan 29 11:23:41.611018 containerd[1603]: time="2025-01-29T11:23:41.610989826Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:23:41.611165 containerd[1603]: time="2025-01-29T11:23:41.611071209Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:23:41.611165 containerd[1603]: time="2025-01-29T11:23:41.611085306Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 11:23:41.611226 containerd[1603]: time="2025-01-29T11:23:41.611182007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:23:41.611461 containerd[1603]: time="2025-01-29T11:23:41.611431626Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:23:41.612119 containerd[1603]: time="2025-01-29T11:23:41.611700162Z" level=info msg="TearDown network for sandbox 
\"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully" Jan 29 11:23:41.612119 containerd[1603]: time="2025-01-29T11:23:41.611720440Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully" Jan 29 11:23:41.611946 systemd[1]: run-netns-cni\x2de3a16be1\x2d62ce\x2d6d22\x2dc770\x2d66977722909c.mount: Deactivated successfully. Jan 29 11:23:41.612394 kubelet[2833]: I0129 11:23:41.611858 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759" Jan 29 11:23:41.612394 kubelet[2833]: E0129 11:23:41.611998 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:41.612460 containerd[1603]: time="2025-01-29T11:23:41.612317563Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:23:41.612833 containerd[1603]: time="2025-01-29T11:23:41.612702688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:4,}" Jan 29 11:23:41.615468 kubelet[2833]: I0129 11:23:41.615438 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4" Jan 29 11:23:41.616458 containerd[1603]: time="2025-01-29T11:23:41.616143980Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:23:41.616458 containerd[1603]: time="2025-01-29T11:23:41.616322296Z" level=info msg="Ensure that sandbox 1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4 in task-service has been cleanup successfully" Jan 29 11:23:41.616853 containerd[1603]: 
time="2025-01-29T11:23:41.616836713Z" level=info msg="TearDown network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" successfully" Jan 29 11:23:41.616921 containerd[1603]: time="2025-01-29T11:23:41.616910382Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" returns successfully" Jan 29 11:23:41.619238 systemd[1]: run-netns-cni\x2d3d4e8cf3\x2d0c11\x2d3da3\x2dca74\x2d61ade94ef0e3.mount: Deactivated successfully. Jan 29 11:23:41.619536 containerd[1603]: time="2025-01-29T11:23:41.619519068Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:23:41.619784 containerd[1603]: time="2025-01-29T11:23:41.619724654Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully" Jan 29 11:23:41.619872 containerd[1603]: time="2025-01-29T11:23:41.619858096Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully" Jan 29 11:23:41.619989 kubelet[2833]: I0129 11:23:41.619956 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3" Jan 29 11:23:41.620364 containerd[1603]: time="2025-01-29T11:23:41.620348198Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\"" Jan 29 11:23:41.620660 containerd[1603]: time="2025-01-29T11:23:41.620588189Z" level=info msg="Ensure that sandbox b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3 in task-service has been cleanup successfully" Jan 29 11:23:41.620660 containerd[1603]: time="2025-01-29T11:23:41.620623024Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:23:41.620838 containerd[1603]: time="2025-01-29T11:23:41.620722201Z" 
level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:23:41.620838 containerd[1603]: time="2025-01-29T11:23:41.620737470Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:23:41.621035 containerd[1603]: time="2025-01-29T11:23:41.620942185Z" level=info msg="TearDown network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully" Jan 29 11:23:41.621035 containerd[1603]: time="2025-01-29T11:23:41.621020451Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621275913Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\"" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621332469Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621354380Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621371191Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621409424Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:23:41.621419 containerd[1603]: time="2025-01-29T11:23:41.621419082Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:23:41.621868 containerd[1603]: 
time="2025-01-29T11:23:41.621814255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:23:41.621972 containerd[1603]: time="2025-01-29T11:23:41.621828121Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" Jan 29 11:23:41.622101 containerd[1603]: time="2025-01-29T11:23:41.622057061Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully" Jan 29 11:23:41.622101 containerd[1603]: time="2025-01-29T11:23:41.622072330Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully" Jan 29 11:23:41.622426 containerd[1603]: time="2025-01-29T11:23:41.622288617Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" Jan 29 11:23:41.622426 containerd[1603]: time="2025-01-29T11:23:41.622371233Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully" Jan 29 11:23:41.622426 containerd[1603]: time="2025-01-29T11:23:41.622380941Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully" Jan 29 11:23:41.622907 containerd[1603]: time="2025-01-29T11:23:41.622709078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:4,}" Jan 29 11:23:41.928453 containerd[1603]: time="2025-01-29T11:23:41.928414337Z" level=info msg="Ensure that sandbox 2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759 in task-service has been cleanup successfully" Jan 29 11:23:41.929237 containerd[1603]: time="2025-01-29T11:23:41.928973378Z" level=info msg="TearDown network 
for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" successfully" Jan 29 11:23:41.929237 containerd[1603]: time="2025-01-29T11:23:41.928994348Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" returns successfully" Jan 29 11:23:41.930347 containerd[1603]: time="2025-01-29T11:23:41.930318418Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:23:41.930523 containerd[1603]: time="2025-01-29T11:23:41.930395162Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:23:41.930523 containerd[1603]: time="2025-01-29T11:23:41.930404229Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:23:41.930947 containerd[1603]: time="2025-01-29T11:23:41.930915942Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:23:41.931084 containerd[1603]: time="2025-01-29T11:23:41.930998828Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:23:41.931084 containerd[1603]: time="2025-01-29T11:23:41.931008877Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:23:41.931504 containerd[1603]: time="2025-01-29T11:23:41.931469142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:3,}" Jan 29 11:23:42.497585 containerd[1603]: time="2025-01-29T11:23:42.497506739Z" level=error msg="Failed to destroy network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: time="2025-01-29T11:23:42.500134772Z" level=error msg="Failed to destroy network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: time="2025-01-29T11:23:42.501126356Z" level=error msg="encountered an error cleaning up failed sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: time="2025-01-29T11:23:42.501172583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: time="2025-01-29T11:23:42.512429963Z" level=error msg="Failed to destroy network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: 
time="2025-01-29T11:23:42.513295611Z" level=error msg="encountered an error cleaning up failed sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541184 containerd[1603]: time="2025-01-29T11:23:42.513329334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541462 kubelet[2833]: E0129 11:23:42.501518 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541462 kubelet[2833]: E0129 11:23:42.501597 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:42.541462 kubelet[2833]: E0129 11:23:42.501615 2833 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:42.541614 kubelet[2833]: E0129 11:23:42.501666 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" Jan 29 11:23:42.541614 kubelet[2833]: E0129 11:23:42.513531 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.541614 kubelet[2833]: E0129 11:23:42.513581 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:42.542864 kubelet[2833]: E0129 11:23:42.513600 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:42.542864 kubelet[2833]: E0129 11:23:42.513656 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e" Jan 29 11:23:42.553254 systemd[1]: run-netns-cni\x2db0bb445f\x2d9570\x2da026\x2db77f\x2d3cba73799cd5.mount: Deactivated successfully. Jan 29 11:23:42.553713 systemd[1]: run-netns-cni\x2d805ba510\x2d79ad\x2d2e2e\x2d74b7\x2d75617e167a11.mount: Deactivated successfully. 
Jan 29 11:23:42.556905 containerd[1603]: time="2025-01-29T11:23:42.556344264Z" level=error msg="Failed to destroy network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.557743 containerd[1603]: time="2025-01-29T11:23:42.557707588Z" level=error msg="encountered an error cleaning up failed sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.557865 containerd[1603]: time="2025-01-29T11:23:42.557848373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.558507 kubelet[2833]: E0129 11:23:42.558129 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.558507 kubelet[2833]: E0129 11:23:42.558200 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:42.558507 kubelet[2833]: E0129 11:23:42.558234 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:42.558654 kubelet[2833]: E0129 11:23:42.558274 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e" Jan 29 11:23:42.559280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda-shm.mount: Deactivated successfully. 
Jan 29 11:23:42.560150 containerd[1603]: time="2025-01-29T11:23:42.560100689Z" level=error msg="encountered an error cleaning up failed sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.560213 containerd[1603]: time="2025-01-29T11:23:42.560191980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.560376 kubelet[2833]: E0129 11:23:42.560352 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.560481 kubelet[2833]: E0129 11:23:42.560463 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:42.560554 kubelet[2833]: E0129 
11:23:42.560539 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:42.560669 kubelet[2833]: E0129 11:23:42.560633 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742" Jan 29 11:23:42.565959 containerd[1603]: time="2025-01-29T11:23:42.565908911Z" level=error msg="Failed to destroy network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.568608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76-shm.mount: Deactivated successfully. 
Jan 29 11:23:42.570191 containerd[1603]: time="2025-01-29T11:23:42.569127974Z" level=error msg="encountered an error cleaning up failed sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.570191 containerd[1603]: time="2025-01-29T11:23:42.569185844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.570308 kubelet[2833]: E0129 11:23:42.569372 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.570308 kubelet[2833]: E0129 11:23:42.569421 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:42.570308 kubelet[2833]: E0129 11:23:42.569442 2833 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:42.570408 kubelet[2833]: E0129 11:23:42.569474 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6" Jan 29 11:23:42.606775 containerd[1603]: time="2025-01-29T11:23:42.606718745Z" level=error msg="Failed to destroy network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.609405 containerd[1603]: time="2025-01-29T11:23:42.609368437Z" level=error msg="encountered an error cleaning up failed sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.609497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75-shm.mount: Deactivated successfully. Jan 29 11:23:42.609925 containerd[1603]: time="2025-01-29T11:23:42.609628086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.609977 kubelet[2833]: E0129 11:23:42.609873 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:42.609977 kubelet[2833]: E0129 11:23:42.609950 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:42.610305 kubelet[2833]: E0129 11:23:42.609992 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:42.610305 kubelet[2833]: E0129 11:23:42.610040 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:42.624069 kubelet[2833]: I0129 11:23:42.623785 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14" Jan 29 11:23:42.624393 containerd[1603]: time="2025-01-29T11:23:42.624352445Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" Jan 29 11:23:42.624580 containerd[1603]: time="2025-01-29T11:23:42.624559114Z" level=info msg="Ensure that sandbox eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14 in task-service has been cleanup successfully" Jan 29 11:23:42.627637 kubelet[2833]: I0129 11:23:42.627179 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76" Jan 29 11:23:42.627693 containerd[1603]: 
time="2025-01-29T11:23:42.627324183Z" level=info msg="TearDown network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" successfully" Jan 29 11:23:42.627693 containerd[1603]: time="2025-01-29T11:23:42.627340915Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" returns successfully" Jan 29 11:23:42.627693 containerd[1603]: time="2025-01-29T11:23:42.627687026Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\"" Jan 29 11:23:42.627782 containerd[1603]: time="2025-01-29T11:23:42.627769460Z" level=info msg="TearDown network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" successfully" Jan 29 11:23:42.627815 containerd[1603]: time="2025-01-29T11:23:42.627780150Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" returns successfully" Jan 29 11:23:42.628051 containerd[1603]: time="2025-01-29T11:23:42.627927688Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" Jan 29 11:23:42.627991 systemd[1]: run-netns-cni\x2d71500d98\x2d8e5d\x2df55f\x2d842c\x2df6ab4e318ea6.mount: Deactivated successfully. 
Jan 29 11:23:42.628142 containerd[1603]: time="2025-01-29T11:23:42.628061891Z" level=info msg="Ensure that sandbox 8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76 in task-service has been cleanup successfully" Jan 29 11:23:42.628509 containerd[1603]: time="2025-01-29T11:23:42.628483363Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\"" Jan 29 11:23:42.628601 containerd[1603]: time="2025-01-29T11:23:42.628574615Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully" Jan 29 11:23:42.628883 containerd[1603]: time="2025-01-29T11:23:42.628599071Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully" Jan 29 11:23:42.628883 containerd[1603]: time="2025-01-29T11:23:42.628810348Z" level=info msg="TearDown network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" successfully" Jan 29 11:23:42.628883 containerd[1603]: time="2025-01-29T11:23:42.628821790Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" returns successfully" Jan 29 11:23:42.629489 containerd[1603]: time="2025-01-29T11:23:42.629445222Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\"" Jan 29 11:23:42.629552 containerd[1603]: time="2025-01-29T11:23:42.629520974Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:23:42.629775 containerd[1603]: time="2025-01-29T11:23:42.629566320Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully" Jan 29 11:23:42.629775 containerd[1603]: time="2025-01-29T11:23:42.629579415Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" 
returns successfully" Jan 29 11:23:42.629775 containerd[1603]: time="2025-01-29T11:23:42.629598180Z" level=info msg="TearDown network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" successfully" Jan 29 11:23:42.629775 containerd[1603]: time="2025-01-29T11:23:42.629610292Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" returns successfully" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630040341Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\"" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630116004Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630125121Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630286203Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630386733Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully" Jan 29 11:23:42.630488 containerd[1603]: time="2025-01-29T11:23:42.630396882Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully" Jan 29 11:23:42.630633 kubelet[2833]: E0129 11:23:42.630275 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:42.630698 containerd[1603]: time="2025-01-29T11:23:42.630462234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:5,}" Jan 29 11:23:42.631057 containerd[1603]: time="2025-01-29T11:23:42.631002701Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:23:42.631101 containerd[1603]: time="2025-01-29T11:23:42.631079645Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:23:42.631101 containerd[1603]: time="2025-01-29T11:23:42.631089625Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 11:23:42.631440 containerd[1603]: time="2025-01-29T11:23:42.631299299Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:23:42.631440 containerd[1603]: time="2025-01-29T11:23:42.631376123Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully" Jan 29 11:23:42.631440 containerd[1603]: time="2025-01-29T11:23:42.631385631Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully" Jan 29 11:23:42.631747 kubelet[2833]: I0129 11:23:42.631717 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda" Jan 29 11:23:42.631795 kubelet[2833]: E0129 11:23:42.631776 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:42.632010 containerd[1603]: time="2025-01-29T11:23:42.631987903Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:5,}" Jan 29 11:23:42.632315 containerd[1603]: time="2025-01-29T11:23:42.632285774Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" Jan 29 11:23:42.632710 containerd[1603]: time="2025-01-29T11:23:42.632480851Z" level=info msg="Ensure that sandbox 08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda in task-service has been cleanup successfully" Jan 29 11:23:42.632837 containerd[1603]: time="2025-01-29T11:23:42.632762670Z" level=info msg="TearDown network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" successfully" Jan 29 11:23:42.633121 containerd[1603]: time="2025-01-29T11:23:42.632876925Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" returns successfully" Jan 29 11:23:42.633417 containerd[1603]: time="2025-01-29T11:23:42.633394268Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:23:42.633486 containerd[1603]: time="2025-01-29T11:23:42.633466153Z" level=info msg="TearDown network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" successfully" Jan 29 11:23:42.633486 containerd[1603]: time="2025-01-29T11:23:42.633482394Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" returns successfully" Jan 29 11:23:42.633918 containerd[1603]: time="2025-01-29T11:23:42.633895721Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:23:42.633994 containerd[1603]: time="2025-01-29T11:23:42.633968337Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:23:42.633994 containerd[1603]: 
time="2025-01-29T11:23:42.633985380Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:23:42.634414 containerd[1603]: time="2025-01-29T11:23:42.634199072Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:23:42.634414 containerd[1603]: time="2025-01-29T11:23:42.634280555Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:23:42.634414 containerd[1603]: time="2025-01-29T11:23:42.634290373Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:23:42.634497 containerd[1603]: time="2025-01-29T11:23:42.634413245Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:23:42.634497 containerd[1603]: time="2025-01-29T11:23:42.634478467Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:23:42.634497 containerd[1603]: time="2025-01-29T11:23:42.634487975Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:23:42.634583 kubelet[2833]: I0129 11:23:42.634563 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75" Jan 29 11:23:42.635283 containerd[1603]: time="2025-01-29T11:23:42.634832343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:23:42.635283 containerd[1603]: time="2025-01-29T11:23:42.635015838Z" level=info msg="StopPodSandbox for 
\"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" Jan 29 11:23:42.635283 containerd[1603]: time="2025-01-29T11:23:42.635175147Z" level=info msg="Ensure that sandbox f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75 in task-service has been cleanup successfully" Jan 29 11:23:42.635422 containerd[1603]: time="2025-01-29T11:23:42.635406162Z" level=info msg="TearDown network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" successfully" Jan 29 11:23:42.635488 containerd[1603]: time="2025-01-29T11:23:42.635455675Z" level=info msg="StopPodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" returns successfully" Jan 29 11:23:42.636041 containerd[1603]: time="2025-01-29T11:23:42.636014656Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:23:42.636098 containerd[1603]: time="2025-01-29T11:23:42.636084657Z" level=info msg="TearDown network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" successfully" Jan 29 11:23:42.636098 containerd[1603]: time="2025-01-29T11:23:42.636095077Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" returns successfully" Jan 29 11:23:42.636388 containerd[1603]: time="2025-01-29T11:23:42.636360847Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:23:42.636455 containerd[1603]: time="2025-01-29T11:23:42.636438523Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:23:42.636480 containerd[1603]: time="2025-01-29T11:23:42.636452669Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:23:42.637061 containerd[1603]: time="2025-01-29T11:23:42.637015738Z" level=info 
msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:23:42.637105 containerd[1603]: time="2025-01-29T11:23:42.637093625Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:23:42.637131 containerd[1603]: time="2025-01-29T11:23:42.637103484Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:23:42.637702 kubelet[2833]: I0129 11:23:42.637638 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b" Jan 29 11:23:42.637781 containerd[1603]: time="2025-01-29T11:23:42.637712469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:4,}" Jan 29 11:23:42.638368 containerd[1603]: time="2025-01-29T11:23:42.638351691Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" Jan 29 11:23:42.638659 containerd[1603]: time="2025-01-29T11:23:42.638536208Z" level=info msg="Ensure that sandbox 9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b in task-service has been cleanup successfully" Jan 29 11:23:42.638750 containerd[1603]: time="2025-01-29T11:23:42.638730383Z" level=info msg="TearDown network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" successfully" Jan 29 11:23:42.638824 containerd[1603]: time="2025-01-29T11:23:42.638811264Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" returns successfully" Jan 29 11:23:42.639193 containerd[1603]: time="2025-01-29T11:23:42.639177503Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:23:42.639310 
containerd[1603]: time="2025-01-29T11:23:42.639297539Z" level=info msg="TearDown network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" successfully" Jan 29 11:23:42.639431 containerd[1603]: time="2025-01-29T11:23:42.639345419Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" returns successfully" Jan 29 11:23:42.639704 containerd[1603]: time="2025-01-29T11:23:42.639681802Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:23:42.639800 containerd[1603]: time="2025-01-29T11:23:42.639781078Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully" Jan 29 11:23:42.639836 containerd[1603]: time="2025-01-29T11:23:42.639798442Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully" Jan 29 11:23:42.640453 containerd[1603]: time="2025-01-29T11:23:42.640123432Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:23:42.640453 containerd[1603]: time="2025-01-29T11:23:42.640220966Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:23:42.640453 containerd[1603]: time="2025-01-29T11:23:42.640233760Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:23:42.640572 containerd[1603]: time="2025-01-29T11:23:42.640529616Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:23:42.640670 containerd[1603]: time="2025-01-29T11:23:42.640623082Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:23:42.640670 
containerd[1603]: time="2025-01-29T11:23:42.640657316Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:23:42.641591 containerd[1603]: time="2025-01-29T11:23:42.641555786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:23:42.642232 kubelet[2833]: I0129 11:23:42.641942 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c" Jan 29 11:23:42.642329 containerd[1603]: time="2025-01-29T11:23:42.642304303Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\"" Jan 29 11:23:42.642566 containerd[1603]: time="2025-01-29T11:23:42.642532914Z" level=info msg="Ensure that sandbox fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c in task-service has been cleanup successfully" Jan 29 11:23:42.643053 containerd[1603]: time="2025-01-29T11:23:42.642726087Z" level=info msg="TearDown network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" successfully" Jan 29 11:23:42.643053 containerd[1603]: time="2025-01-29T11:23:42.642747567Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" returns successfully" Jan 29 11:23:42.643053 containerd[1603]: time="2025-01-29T11:23:42.643009149Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\"" Jan 29 11:23:42.643122 containerd[1603]: time="2025-01-29T11:23:42.643097255Z" level=info msg="TearDown network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully" Jan 29 11:23:42.643122 containerd[1603]: time="2025-01-29T11:23:42.643107725Z" level=info msg="StopPodSandbox for 
\"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully" Jan 29 11:23:42.643765 containerd[1603]: time="2025-01-29T11:23:42.643516503Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\"" Jan 29 11:23:42.643765 containerd[1603]: time="2025-01-29T11:23:42.643606803Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully" Jan 29 11:23:42.643765 containerd[1603]: time="2025-01-29T11:23:42.643616731Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully" Jan 29 11:23:42.645896 containerd[1603]: time="2025-01-29T11:23:42.645868216Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" Jan 29 11:23:42.646008 containerd[1603]: time="2025-01-29T11:23:42.645982431Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully" Jan 29 11:23:42.646050 containerd[1603]: time="2025-01-29T11:23:42.646007107Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully" Jan 29 11:23:42.646293 containerd[1603]: time="2025-01-29T11:23:42.646266775Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" Jan 29 11:23:42.646382 containerd[1603]: time="2025-01-29T11:23:42.646355703Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully" Jan 29 11:23:42.646416 containerd[1603]: time="2025-01-29T11:23:42.646379688Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully" Jan 29 11:23:42.647151 containerd[1603]: time="2025-01-29T11:23:42.647125329Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:5,}" Jan 29 11:23:42.892613 containerd[1603]: time="2025-01-29T11:23:42.892465266Z" level=error msg="Failed to destroy network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.893220626Z" level=error msg="encountered an error cleaning up failed sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.893278685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.907270417Z" level=error msg="Failed to destroy network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: 
time="2025-01-29T11:23:42.907732085Z" level=error msg="encountered an error cleaning up failed sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.907795013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.909997004Z" level=error msg="Failed to destroy network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.910368803Z" level=error msg="encountered an error cleaning up failed sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.910418958Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.913109627Z" level=error msg="Failed to destroy network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.913510843Z" level=error msg="encountered an error cleaning up failed sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.913542732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.914454887Z" level=error msg="Failed to destroy network for sandbox 
\"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.914813592Z" level=error msg="encountered an error cleaning up failed sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.072784 containerd[1603]: time="2025-01-29T11:23:42.914843518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.073264 kubelet[2833]: E0129 11:23:42.893528 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.073264 kubelet[2833]: E0129 11:23:42.893589 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:43.073264 kubelet[2833]: E0129 11:23:42.893617 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" Jan 29 11:23:43.073365 kubelet[2833]: E0129 11:23:42.893847 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-kfgmw_calico-apiserver(10ddf1c0-21b1-4d7e-af9d-b4ca369b7742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podUID="10ddf1c0-21b1-4d7e-af9d-b4ca369b7742" Jan 29 11:23:43.073365 kubelet[2833]: E0129 11:23:42.908365 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:23:43.073365 kubelet[2833]: E0129 11:23:42.908458 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:43.073461 kubelet[2833]: E0129 11:23:42.908478 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8qdw4" Jan 29 11:23:43.073461 kubelet[2833]: E0129 11:23:42.908515 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8qdw4_kube-system(622451af-befd-4d1a-89be-df128077d7a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podUID="622451af-befd-4d1a-89be-df128077d7a6" Jan 29 11:23:43.073461 kubelet[2833]: E0129 11:23:42.910564 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.073549 kubelet[2833]: E0129 11:23:42.910591 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:43.073549 kubelet[2833]: E0129 11:23:42.910607 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" Jan 29 11:23:43.073549 kubelet[2833]: E0129 11:23:42.910680 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b9f5bc8c-l4mfg_calico-apiserver(337f98ac-1b65-4615-aa71-55b1dcfcd61e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podUID="337f98ac-1b65-4615-aa71-55b1dcfcd61e" Jan 29 11:23:43.073655 kubelet[2833]: E0129 11:23:42.913896 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.073655 kubelet[2833]: E0129 11:23:42.913954 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:43.073655 kubelet[2833]: E0129 11:23:42.913974 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2fch" Jan 29 11:23:43.073736 kubelet[2833]: E0129 11:23:42.914019 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2fch_kube-system(254f70df-c108-425a-b324-8fe9c6bfe00e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2fch" podUID="254f70df-c108-425a-b324-8fe9c6bfe00e" Jan 29 11:23:43.073736 kubelet[2833]: E0129 11:23:42.914970 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.073736 kubelet[2833]: E0129 11:23:42.915004 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:43.073834 kubelet[2833]: E0129 11:23:42.915019 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnqjx" Jan 29 11:23:43.073834 kubelet[2833]: E0129 11:23:42.915052 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnqjx_calico-system(35088270-b85c-4fff-9f47-df92a059da0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnqjx" podUID="35088270-b85c-4fff-9f47-df92a059da0a" Jan 29 11:23:43.132081 containerd[1603]: time="2025-01-29T11:23:43.132029999Z" level=error msg="Failed to destroy network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.132636 containerd[1603]: time="2025-01-29T11:23:43.132594029Z" level=error msg="encountered an error cleaning up failed sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.132812 containerd[1603]: time="2025-01-29T11:23:43.132668549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 29 11:23:43.132962 kubelet[2833]: E0129 11:23:43.132918 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:23:43.133052 kubelet[2833]: E0129 11:23:43.132978 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:43.133052 kubelet[2833]: E0129 11:23:43.132998 2833 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" Jan 29 11:23:43.133107 kubelet[2833]: E0129 11:23:43.133039 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695974dcd7-g2c9b_calico-system(530f2b50-c66a-4ebc-869f-eeb1d00efe6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" Jan 29 11:23:43.330426 containerd[1603]: time="2025-01-29T11:23:43.330371559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:43.331219 containerd[1603]: time="2025-01-29T11:23:43.331181192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:23:43.332369 containerd[1603]: time="2025-01-29T11:23:43.332322557Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:43.334256 containerd[1603]: time="2025-01-29T11:23:43.334210468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:43.334794 containerd[1603]: time="2025-01-29T11:23:43.334758287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.781626137s" Jan 29 11:23:43.334794 containerd[1603]: time="2025-01-29T11:23:43.334787722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" 
Jan 29 11:23:43.342130 containerd[1603]: time="2025-01-29T11:23:43.342088129Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:23:43.364465 containerd[1603]: time="2025-01-29T11:23:43.364426389Z" level=info msg="CreateContainer within sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\"" Jan 29 11:23:43.364915 containerd[1603]: time="2025-01-29T11:23:43.364880673Z" level=info msg="StartContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\"" Jan 29 11:23:43.542570 containerd[1603]: time="2025-01-29T11:23:43.542459997Z" level=info msg="StartContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" returns successfully" Jan 29 11:23:43.544796 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:23:43.544858 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:23:43.550336 systemd[1]: run-netns-cni\x2dc01cef9a\x2dab0d\x2d49e9\x2db4f2\x2d6bf49b380b7b.mount: Deactivated successfully. Jan 29 11:23:43.550510 systemd[1]: run-netns-cni\x2d95617f11\x2dd234\x2d5bb1\x2dff15\x2d227323d3af6c.mount: Deactivated successfully. Jan 29 11:23:43.550638 systemd[1]: run-netns-cni\x2da90c2a6e\x2dcce8\x2dde52\x2d5636\x2d2ff0d2e6b36d.mount: Deactivated successfully. Jan 29 11:23:43.550792 systemd[1]: run-netns-cni\x2dff51dfff\x2d6b2c\x2d4ab1\x2d932e\x2d72d385e2afd0.mount: Deactivated successfully. Jan 29 11:23:43.550925 systemd[1]: run-netns-cni\x2dc729b51f\x2daeba\x2df9b5\x2dcc2a\x2d03a46868196e.mount: Deactivated successfully. Jan 29 11:23:43.551094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652762078.mount: Deactivated successfully. 
Jan 29 11:23:43.646743 kubelet[2833]: I0129 11:23:43.646614 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.647280630Z" level=info msg="StopPodSandbox for \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\"" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.647464075Z" level=info msg="Ensure that sandbox 8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e in task-service has been cleanup successfully" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.647800548Z" level=info msg="TearDown network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" successfully" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.647813122Z" level=info msg="StopPodSandbox for \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" returns successfully" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.650926856Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.650998671Z" level=info msg="TearDown network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" successfully" Jan 29 11:23:43.651395 containerd[1603]: time="2025-01-29T11:23:43.651024940Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" returns successfully" Jan 29 11:23:43.651343 systemd[1]: run-netns-cni\x2d81352a65\x2db7c4\x2d3f1b\x2dedeb\x2da23d39c6b874.mount: Deactivated successfully. 
Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.651431374Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.651564014Z" level=info msg="TearDown network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" successfully" Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.651576718Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" returns successfully" Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.651999483Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.652080515Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully" Jan 29 11:23:43.652129 containerd[1603]: time="2025-01-29T11:23:43.652092577Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully" Jan 29 11:23:43.652485 containerd[1603]: time="2025-01-29T11:23:43.652455931Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:23:43.652571 containerd[1603]: time="2025-01-29T11:23:43.652545368Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:23:43.652571 containerd[1603]: time="2025-01-29T11:23:43.652561900Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:23:43.653008 containerd[1603]: time="2025-01-29T11:23:43.652940772Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:23:43.653036 
containerd[1603]: time="2025-01-29T11:23:43.653016054Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:23:43.653036 containerd[1603]: time="2025-01-29T11:23:43.653026263Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:23:43.653364 kubelet[2833]: I0129 11:23:43.653302 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344" Jan 29 11:23:43.653667 containerd[1603]: time="2025-01-29T11:23:43.653610071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:6,}" Jan 29 11:23:43.654178 containerd[1603]: time="2025-01-29T11:23:43.654135238Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\"" Jan 29 11:23:43.654410 containerd[1603]: time="2025-01-29T11:23:43.654382683Z" level=info msg="Ensure that sandbox 150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344 in task-service has been cleanup successfully" Jan 29 11:23:43.654713 containerd[1603]: time="2025-01-29T11:23:43.654615851Z" level=info msg="TearDown network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" successfully" Jan 29 11:23:43.654713 containerd[1603]: time="2025-01-29T11:23:43.654702515Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" returns successfully" Jan 29 11:23:43.655229 containerd[1603]: time="2025-01-29T11:23:43.655196443Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\"" Jan 29 11:23:43.655331 containerd[1603]: time="2025-01-29T11:23:43.655300299Z" level=info msg="TearDown network for sandbox 
\"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" successfully" Jan 29 11:23:43.655331 containerd[1603]: time="2025-01-29T11:23:43.655325847Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" returns successfully" Jan 29 11:23:43.655849 containerd[1603]: time="2025-01-29T11:23:43.655675234Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\"" Jan 29 11:23:43.655849 containerd[1603]: time="2025-01-29T11:23:43.655803154Z" level=info msg="TearDown network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully" Jan 29 11:23:43.655849 containerd[1603]: time="2025-01-29T11:23:43.655815146Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully" Jan 29 11:23:43.656417 containerd[1603]: time="2025-01-29T11:23:43.656400337Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\"" Jan 29 11:23:43.656681 containerd[1603]: time="2025-01-29T11:23:43.656541142Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully" Jan 29 11:23:43.656681 containerd[1603]: time="2025-01-29T11:23:43.656556130Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully" Jan 29 11:23:43.657504 containerd[1603]: time="2025-01-29T11:23:43.657411308Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" Jan 29 11:23:43.657631 containerd[1603]: time="2025-01-29T11:23:43.657511737Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully" Jan 29 11:23:43.657631 containerd[1603]: time="2025-01-29T11:23:43.657524932Z" level=info msg="StopPodSandbox for 
\"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully" Jan 29 11:23:43.658259 kubelet[2833]: I0129 11:23:43.657860 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12" Jan 29 11:23:43.657931 systemd[1]: run-netns-cni\x2d881e8c46\x2d6caa\x2d6c45\x2da077\x2de8b52ce550da.mount: Deactivated successfully. Jan 29 11:23:43.659059 containerd[1603]: time="2025-01-29T11:23:43.659018359Z" level=info msg="StopPodSandbox for \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\"" Jan 29 11:23:43.659306 containerd[1603]: time="2025-01-29T11:23:43.659207094Z" level=info msg="Ensure that sandbox 0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12 in task-service has been cleanup successfully" Jan 29 11:23:43.659306 containerd[1603]: time="2025-01-29T11:23:43.659046502Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" Jan 29 11:23:43.659849 containerd[1603]: time="2025-01-29T11:23:43.659391301Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully" Jan 29 11:23:43.659849 containerd[1603]: time="2025-01-29T11:23:43.659405708Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully" Jan 29 11:23:43.659955 containerd[1603]: time="2025-01-29T11:23:43.659868027Z" level=info msg="TearDown network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" successfully" Jan 29 11:23:43.659955 containerd[1603]: time="2025-01-29T11:23:43.659883025Z" level=info msg="StopPodSandbox for \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" returns successfully" Jan 29 11:23:43.660031 containerd[1603]: time="2025-01-29T11:23:43.660007309Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:6,}" Jan 29 11:23:43.660277 containerd[1603]: time="2025-01-29T11:23:43.660252971Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" Jan 29 11:23:43.660508 containerd[1603]: time="2025-01-29T11:23:43.660431006Z" level=info msg="TearDown network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" successfully" Jan 29 11:23:43.660508 containerd[1603]: time="2025-01-29T11:23:43.660449200Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" returns successfully" Jan 29 11:23:43.660753 containerd[1603]: time="2025-01-29T11:23:43.660710751Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:23:43.660904 containerd[1603]: time="2025-01-29T11:23:43.660839464Z" level=info msg="TearDown network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" successfully" Jan 29 11:23:43.660904 containerd[1603]: time="2025-01-29T11:23:43.660853370Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" returns successfully" Jan 29 11:23:43.661387 containerd[1603]: time="2025-01-29T11:23:43.661360643Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:23:43.662213 containerd[1603]: time="2025-01-29T11:23:43.661451444Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully" Jan 29 11:23:43.662213 containerd[1603]: time="2025-01-29T11:23:43.661466543Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully" Jan 29 11:23:43.662213 containerd[1603]: 
time="2025-01-29T11:23:43.661913954Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:23:43.662213 containerd[1603]: time="2025-01-29T11:23:43.661978455Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:23:43.662213 containerd[1603]: time="2025-01-29T11:23:43.661987231Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 11:23:43.662495 containerd[1603]: time="2025-01-29T11:23:43.662445332Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:23:43.662545 containerd[1603]: time="2025-01-29T11:23:43.662528229Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully" Jan 29 11:23:43.662578 containerd[1603]: time="2025-01-29T11:23:43.662543517Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully" Jan 29 11:23:43.662756 kubelet[2833]: E0129 11:23:43.662720 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:43.663023 containerd[1603]: time="2025-01-29T11:23:43.662948058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:6,}" Jan 29 11:23:43.663184 systemd[1]: run-netns-cni\x2dd5a1493f\x2dce0e\x2d583b\x2d1102\x2d1a9e90fe2415.mount: Deactivated successfully. 
Jan 29 11:23:43.663305 kubelet[2833]: I0129 11:23:43.663231 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393" Jan 29 11:23:43.663888 containerd[1603]: time="2025-01-29T11:23:43.663864571Z" level=info msg="StopPodSandbox for \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\"" Jan 29 11:23:43.664041 containerd[1603]: time="2025-01-29T11:23:43.664014082Z" level=info msg="Ensure that sandbox 0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393 in task-service has been cleanup successfully" Jan 29 11:23:43.664343 containerd[1603]: time="2025-01-29T11:23:43.664322803Z" level=info msg="TearDown network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" successfully" Jan 29 11:23:43.664343 containerd[1603]: time="2025-01-29T11:23:43.664339925Z" level=info msg="StopPodSandbox for \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" returns successfully" Jan 29 11:23:43.664696 containerd[1603]: time="2025-01-29T11:23:43.664675165Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" Jan 29 11:23:43.664774 containerd[1603]: time="2025-01-29T11:23:43.664755697Z" level=info msg="TearDown network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" successfully" Jan 29 11:23:43.664774 containerd[1603]: time="2025-01-29T11:23:43.664770725Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" returns successfully" Jan 29 11:23:43.665118 containerd[1603]: time="2025-01-29T11:23:43.665092009Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:23:43.665218 containerd[1603]: time="2025-01-29T11:23:43.665200423Z" level=info msg="TearDown network for sandbox 
\"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" successfully" Jan 29 11:23:43.665218 containerd[1603]: time="2025-01-29T11:23:43.665214710Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" returns successfully" Jan 29 11:23:43.665578 containerd[1603]: time="2025-01-29T11:23:43.665545021Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:23:43.665672 containerd[1603]: time="2025-01-29T11:23:43.665615874Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:23:43.665672 containerd[1603]: time="2025-01-29T11:23:43.665628678Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:23:43.665977 containerd[1603]: time="2025-01-29T11:23:43.665952066Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:23:43.666057 containerd[1603]: time="2025-01-29T11:23:43.666039440Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:23:43.666304 containerd[1603]: time="2025-01-29T11:23:43.666266026Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:23:43.666481 kubelet[2833]: I0129 11:23:43.666459 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537" Jan 29 11:23:43.666991 containerd[1603]: time="2025-01-29T11:23:43.666942388Z" level=info msg="StopPodSandbox for \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\"" Jan 29 11:23:43.666991 containerd[1603]: time="2025-01-29T11:23:43.666955623Z" level=info msg="StopPodSandbox for 
\"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:23:43.667168 containerd[1603]: time="2025-01-29T11:23:43.667031786Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:23:43.667168 containerd[1603]: time="2025-01-29T11:23:43.667040893Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:23:43.667168 containerd[1603]: time="2025-01-29T11:23:43.667085718Z" level=info msg="Ensure that sandbox 4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537 in task-service has been cleanup successfully" Jan 29 11:23:43.667305 containerd[1603]: time="2025-01-29T11:23:43.667287868Z" level=info msg="TearDown network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" successfully" Jan 29 11:23:43.667305 containerd[1603]: time="2025-01-29T11:23:43.667300752Z" level=info msg="StopPodSandbox for \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" returns successfully" Jan 29 11:23:43.667207 systemd[1]: run-netns-cni\x2d74b86250\x2d5924\x2dda6b\x2d2e26\x2d04c0a700ac18.mount: Deactivated successfully. 
Jan 29 11:23:43.667696 containerd[1603]: time="2025-01-29T11:23:43.667677440Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" Jan 29 11:23:43.667893 containerd[1603]: time="2025-01-29T11:23:43.667814587Z" level=info msg="TearDown network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" successfully" Jan 29 11:23:43.667893 containerd[1603]: time="2025-01-29T11:23:43.667828414Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" returns successfully" Jan 29 11:23:43.668023 containerd[1603]: time="2025-01-29T11:23:43.667994205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:6,}" Jan 29 11:23:43.668226 containerd[1603]: time="2025-01-29T11:23:43.668189092Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\"" Jan 29 11:23:43.668309 containerd[1603]: time="2025-01-29T11:23:43.668271327Z" level=info msg="TearDown network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" successfully" Jan 29 11:23:43.668309 containerd[1603]: time="2025-01-29T11:23:43.668286886Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" returns successfully" Jan 29 11:23:43.668529 containerd[1603]: time="2025-01-29T11:23:43.668503604Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\"" Jan 29 11:23:43.668747 containerd[1603]: time="2025-01-29T11:23:43.668633317Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully" Jan 29 11:23:43.668747 containerd[1603]: time="2025-01-29T11:23:43.668729448Z" level=info msg="StopPodSandbox for 
\"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully" Jan 29 11:23:43.668963 containerd[1603]: time="2025-01-29T11:23:43.668938892Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\"" Jan 29 11:23:43.669033 containerd[1603]: time="2025-01-29T11:23:43.669023791Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully" Jan 29 11:23:43.669057 containerd[1603]: time="2025-01-29T11:23:43.669034972Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully" Jan 29 11:23:43.669264 containerd[1603]: time="2025-01-29T11:23:43.669238224Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\"" Jan 29 11:23:43.669361 containerd[1603]: time="2025-01-29T11:23:43.669312995Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully" Jan 29 11:23:43.669361 containerd[1603]: time="2025-01-29T11:23:43.669323144Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully" Jan 29 11:23:43.669700 kubelet[2833]: E0129 11:23:43.669576 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:43.669897 kubelet[2833]: E0129 11:23:43.669871 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:43.670054 containerd[1603]: time="2025-01-29T11:23:43.670027829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:6,}" Jan 
29 11:23:43.671540 kubelet[2833]: I0129 11:23:43.671518 2833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221" Jan 29 11:23:43.671981 containerd[1603]: time="2025-01-29T11:23:43.671956987Z" level=info msg="StopPodSandbox for \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\"" Jan 29 11:23:43.672184 containerd[1603]: time="2025-01-29T11:23:43.672165910Z" level=info msg="Ensure that sandbox 2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221 in task-service has been cleanup successfully" Jan 29 11:23:43.672340 containerd[1603]: time="2025-01-29T11:23:43.672323074Z" level=info msg="TearDown network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" successfully" Jan 29 11:23:43.672368 containerd[1603]: time="2025-01-29T11:23:43.672339084Z" level=info msg="StopPodSandbox for \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" returns successfully" Jan 29 11:23:43.672612 containerd[1603]: time="2025-01-29T11:23:43.672584917Z" level=info msg="StopPodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" Jan 29 11:23:43.672697 containerd[1603]: time="2025-01-29T11:23:43.672688542Z" level=info msg="TearDown network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" successfully" Jan 29 11:23:43.672731 containerd[1603]: time="2025-01-29T11:23:43.672699021Z" level=info msg="StopPodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" returns successfully" Jan 29 11:23:43.672998 containerd[1603]: time="2025-01-29T11:23:43.672972315Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:23:43.673080 containerd[1603]: time="2025-01-29T11:23:43.673061884Z" level=info msg="TearDown network for sandbox 
\"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" successfully" Jan 29 11:23:43.673108 containerd[1603]: time="2025-01-29T11:23:43.673078225Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" returns successfully" Jan 29 11:23:43.673367 containerd[1603]: time="2025-01-29T11:23:43.673342100Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:23:43.673435 containerd[1603]: time="2025-01-29T11:23:43.673427401Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:23:43.673460 containerd[1603]: time="2025-01-29T11:23:43.673438632Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:23:43.673702 containerd[1603]: time="2025-01-29T11:23:43.673679485Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:23:43.673870 containerd[1603]: time="2025-01-29T11:23:43.673842241Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:23:43.673870 containerd[1603]: time="2025-01-29T11:23:43.673858943Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:23:43.674245 containerd[1603]: time="2025-01-29T11:23:43.674208761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:5,}" Jan 29 11:23:44.553007 systemd[1]: run-netns-cni\x2d82345137\x2dd3bc\x2db3b3\x2d938d\x2de6db411f1321.mount: Deactivated successfully. Jan 29 11:23:44.553433 systemd[1]: run-netns-cni\x2d9836c9c1\x2df6b7\x2de0c9\x2d890e\x2d67262835f848.mount: Deactivated successfully. 
Jan 29 11:23:44.562961 systemd-networkd[1252]: cali2ce8e5616f6: Link UP Jan 29 11:23:44.563256 systemd-networkd[1252]: cali2ce8e5616f6: Gained carrier Jan 29 11:23:44.571901 kubelet[2833]: I0129 11:23:44.571701 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dk7q6" podStartSLOduration=2.396273237 podStartE2EDuration="17.571685198s" podCreationTimestamp="2025-01-29 11:23:27 +0000 UTC" firstStartedPulling="2025-01-29 11:23:28.159971121 +0000 UTC m=+21.965027226" lastFinishedPulling="2025-01-29 11:23:43.335383082 +0000 UTC m=+37.140439187" observedRunningTime="2025-01-29 11:23:43.997614706 +0000 UTC m=+37.802670831" watchObservedRunningTime="2025-01-29 11:23:44.571685198 +0000 UTC m=+38.376741303" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.434 [INFO][4971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.447 [INFO][4971] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gnqjx-eth0 csi-node-driver- calico-system 35088270-b85c-4fff-9f47-df92a059da0a 603 0 2025-01-29 11:23:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gnqjx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ce8e5616f6 [] []}} ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.447 [INFO][4971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.515 [INFO][5067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" HandleID="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Workload="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5067] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" HandleID="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Workload="localhost-k8s-csi--node--driver--gnqjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gnqjx", "timestamp":"2025-01-29 11:23:44.515231761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5067] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.530 [INFO][5067] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.536 [INFO][5067] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.539 [INFO][5067] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.541 [INFO][5067] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.542 [INFO][5067] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.542 [INFO][5067] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.543 [INFO][5067] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.547 [INFO][5067] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.551 [INFO][5067] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.551 [INFO][5067] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" host="localhost" Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.551 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:23:44.576671 containerd[1603]: 2025-01-29 11:23:44.551 [INFO][5067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" HandleID="k8s-pod-network.bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Workload="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.554 [INFO][4971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gnqjx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35088270-b85c-4fff-9f47-df92a059da0a", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gnqjx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ce8e5616f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.555 [INFO][4971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.555 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ce8e5616f6 ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.562 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.563 [INFO][4971] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" 
Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gnqjx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35088270-b85c-4fff-9f47-df92a059da0a", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f", Pod:"csi-node-driver-gnqjx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ce8e5616f6", MAC:"2a:07:56:a8:66:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.577277 containerd[1603]: 2025-01-29 11:23:44.573 [INFO][4971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f" Namespace="calico-system" Pod="csi-node-driver-gnqjx" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnqjx-eth0" Jan 29 11:23:44.588583 
systemd-networkd[1252]: calicff975e505b: Link UP Jan 29 11:23:44.589197 systemd-networkd[1252]: calicff975e505b: Gained carrier Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.450 [INFO][4980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.459 [INFO][4980] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0 coredns-7db6d8ff4d- kube-system 622451af-befd-4d1a-89be-df128077d7a6 768 0 2025-01-29 11:23:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8qdw4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicff975e505b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.459 [INFO][4980] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.515 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" HandleID="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Workload="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" HandleID="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Workload="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051c80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8qdw4", "timestamp":"2025-01-29 11:23:44.515079385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.528 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.552 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.553 [INFO][5074] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.556 [INFO][5074] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.559 [INFO][5074] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.564 [INFO][5074] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.566 [INFO][5074] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.567 [INFO][5074] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.567 [INFO][5074] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.572 [INFO][5074] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7 Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.577 [INFO][5074] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.582 [INFO][5074] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.583 [INFO][5074] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" host="localhost" Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.583 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:23:44.602860 containerd[1603]: 2025-01-29 11:23:44.583 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" HandleID="k8s-pod-network.a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Workload="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.586 [INFO][4980] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"622451af-befd-4d1a-89be-df128077d7a6", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8qdw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicff975e505b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.586 [INFO][4980] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.586 [INFO][4980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicff975e505b ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.589 [INFO][4980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.589 [INFO][4980] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"622451af-befd-4d1a-89be-df128077d7a6", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7", Pod:"coredns-7db6d8ff4d-8qdw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicff975e505b", MAC:"0a:5e:ff:9c:cc:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.603417 containerd[1603]: 2025-01-29 11:23:44.599 [INFO][4980] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-8qdw4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8qdw4-eth0" Jan 29 11:23:44.618630 systemd-networkd[1252]: cali44476355121: Link UP Jan 29 11:23:44.619147 systemd-networkd[1252]: cali44476355121: Gained carrier Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.394 [INFO][4959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.423 [INFO][4959] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0 calico-apiserver-b9f5bc8c- calico-apiserver 337f98ac-1b65-4615-aa71-55b1dcfcd61e 769 0 2025-01-29 11:23:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b9f5bc8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b9f5bc8c-l4mfg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44476355121 [] []}} ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.423 [INFO][4959] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.516 [INFO][5045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" 
HandleID="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.529 [INFO][5045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" HandleID="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004323f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b9f5bc8c-l4mfg", "timestamp":"2025-01-29 11:23:44.51661407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.529 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.583 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.583 [INFO][5045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.584 [INFO][5045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.588 [INFO][5045] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.593 [INFO][5045] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.597 [INFO][5045] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.600 [INFO][5045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.600 [INFO][5045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.601 [INFO][5045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2 Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.605 [INFO][5045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.611 [INFO][5045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.611 [INFO][5045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" host="localhost" Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.611 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:23:44.634495 containerd[1603]: 2025-01-29 11:23:44.611 [INFO][5045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" HandleID="k8s-pod-network.dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.614 [INFO][4959] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0", GenerateName:"calico-apiserver-b9f5bc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"337f98ac-1b65-4615-aa71-55b1dcfcd61e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9f5bc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b9f5bc8c-l4mfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44476355121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.614 [INFO][4959] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.614 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44476355121 ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.619 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.622 [INFO][4959] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0", GenerateName:"calico-apiserver-b9f5bc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"337f98ac-1b65-4615-aa71-55b1dcfcd61e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9f5bc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2", Pod:"calico-apiserver-b9f5bc8c-l4mfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44476355121", MAC:"5e:2b:87:74:0b:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.635230 containerd[1603]: 2025-01-29 11:23:44.629 [INFO][4959] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2" Namespace="calico-apiserver" 
Pod="calico-apiserver-b9f5bc8c-l4mfg" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--l4mfg-eth0" Jan 29 11:23:44.638801 containerd[1603]: time="2025-01-29T11:23:44.638209576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.638801 containerd[1603]: time="2025-01-29T11:23:44.638319744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.638801 containerd[1603]: time="2025-01-29T11:23:44.638335924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.638801 containerd[1603]: time="2025-01-29T11:23:44.638476568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.646182 containerd[1603]: time="2025-01-29T11:23:44.645747386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.646182 containerd[1603]: time="2025-01-29T11:23:44.645795086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.646182 containerd[1603]: time="2025-01-29T11:23:44.645808982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.646182 containerd[1603]: time="2025-01-29T11:23:44.645887630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.665863 systemd-networkd[1252]: cali3a1be07509d: Link UP Jan 29 11:23:44.668917 systemd-networkd[1252]: cali3a1be07509d: Gained carrier Jan 29 11:23:44.671890 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.674872 kubelet[2833]: E0129 11:23:44.674849 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:44.694551 containerd[1603]: time="2025-01-29T11:23:44.694102879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.694551 containerd[1603]: time="2025-01-29T11:23:44.694267780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.694551 containerd[1603]: time="2025-01-29T11:23:44.694297235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.695173 containerd[1603]: time="2025-01-29T11:23:44.694604041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.424 [INFO][5002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.438 [INFO][5002] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0 calico-apiserver-b9f5bc8c- calico-apiserver 10ddf1c0-21b1-4d7e-af9d-b4ca369b7742 771 0 2025-01-29 11:23:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b9f5bc8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b9f5bc8c-kfgmw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a1be07509d [] []}} ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.438 [INFO][5002] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.515 [INFO][5049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" HandleID="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.529 
[INFO][5049] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" HandleID="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b9f5bc8c-kfgmw", "timestamp":"2025-01-29 11:23:44.515417189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.529 [INFO][5049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.611 [INFO][5049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.612 [INFO][5049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.613 [INFO][5049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.618 [INFO][5049] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.622 [INFO][5049] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.623 [INFO][5049] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.625 [INFO][5049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.625 [INFO][5049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.630 [INFO][5049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.636 [INFO][5049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.647 [INFO][5049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.647 [INFO][5049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" host="localhost" Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.648 [INFO][5049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:23:44.701768 containerd[1603]: 2025-01-29 11:23:44.648 [INFO][5049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" HandleID="k8s-pod-network.ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Workload="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.662 [INFO][5002] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0", GenerateName:"calico-apiserver-b9f5bc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ddf1c0-21b1-4d7e-af9d-b4ca369b7742", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9f5bc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b9f5bc8c-kfgmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a1be07509d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.662 [INFO][5002] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.662 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a1be07509d ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.670 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.670 [INFO][5002] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0", GenerateName:"calico-apiserver-b9f5bc8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ddf1c0-21b1-4d7e-af9d-b4ca369b7742", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b9f5bc8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c", Pod:"calico-apiserver-b9f5bc8c-kfgmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a1be07509d", MAC:"e2:8f:50:1d:a0:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.702732 containerd[1603]: 2025-01-29 11:23:44.687 [INFO][5002] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c" Namespace="calico-apiserver" 
Pod="calico-apiserver-b9f5bc8c-kfgmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--b9f5bc8c--kfgmw-eth0" Jan 29 11:23:44.710071 systemd-networkd[1252]: cali00f6c6bd503: Link UP Jan 29 11:23:44.711035 systemd-networkd[1252]: cali00f6c6bd503: Gained carrier Jan 29 11:23:44.724016 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.437 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.446 [INFO][4978] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0 calico-kube-controllers-695974dcd7- calico-system 530f2b50-c66a-4ebc-869f-eeb1d00efe6c 770 0 2025-01-29 11:23:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:695974dcd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-695974dcd7-g2c9b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali00f6c6bd503 [] []}} ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.446 [INFO][4978] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.731793 
containerd[1603]: 2025-01-29 11:23:44.518 [INFO][5055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.529 [INFO][5055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-695974dcd7-g2c9b", "timestamp":"2025-01-29 11:23:44.518893094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.529 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.647 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.648 [INFO][5055] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.653 [INFO][5055] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.658 [INFO][5055] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.664 [INFO][5055] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.665 [INFO][5055] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.668 [INFO][5055] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.668 [INFO][5055] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.670 [INFO][5055] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74 Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.676 [INFO][5055] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.685 [INFO][5055] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.686 [INFO][5055] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" host="localhost" Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.688 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:23:44.731793 containerd[1603]: 2025-01-29 11:23:44.688 [INFO][5055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.704 [INFO][4978] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0", GenerateName:"calico-kube-controllers-695974dcd7-", Namespace:"calico-system", SelfLink:"", UID:"530f2b50-c66a-4ebc-869f-eeb1d00efe6c", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695974dcd7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-695974dcd7-g2c9b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00f6c6bd503", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.705 [INFO][4978] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.705 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00f6c6bd503 ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.709 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.710 [INFO][4978] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0", GenerateName:"calico-kube-controllers-695974dcd7-", Namespace:"calico-system", SelfLink:"", UID:"530f2b50-c66a-4ebc-869f-eeb1d00efe6c", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695974dcd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74", Pod:"calico-kube-controllers-695974dcd7-g2c9b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00f6c6bd503", MAC:"f2:07:21:bd:cf:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.732306 containerd[1603]: 2025-01-29 11:23:44.725 [INFO][4978] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Namespace="calico-system" Pod="calico-kube-controllers-695974dcd7-g2c9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:23:44.746094 containerd[1603]: time="2025-01-29T11:23:44.745846231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8qdw4,Uid:622451af-befd-4d1a-89be-df128077d7a6,Namespace:kube-system,Attempt:6,} returns sandbox id \"a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7\"" Jan 29 11:23:44.747775 kubelet[2833]: E0129 11:23:44.747305 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:44.750881 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.753537 containerd[1603]: time="2025-01-29T11:23:44.753505249Z" level=info msg="CreateContainer within sandbox \"a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:23:44.776509 systemd-networkd[1252]: cali578c7bd7f0f: Link UP Jan 29 11:23:44.777805 systemd-networkd[1252]: cali578c7bd7f0f: Gained carrier Jan 29 11:23:44.785378 containerd[1603]: time="2025-01-29T11:23:44.785341879Z" level=info msg="CreateContainer within sandbox \"a83d8999a02f4cec188330c66b7d74c8beb15179ee881caf881091d85233ebf7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39635a25244f1242b3d833a9a9e335d4c8aa6789ad4b4fe07046c75b81a6fbeb\"" Jan 29 11:23:44.789498 containerd[1603]: time="2025-01-29T11:23:44.789472704Z" level=info msg="StartContainer for \"39635a25244f1242b3d833a9a9e335d4c8aa6789ad4b4fe07046c75b81a6fbeb\"" Jan 29 11:23:44.793730 containerd[1603]: time="2025-01-29T11:23:44.792419875Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.793730 containerd[1603]: time="2025-01-29T11:23:44.792572251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.793730 containerd[1603]: time="2025-01-29T11:23:44.792589424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.793730 containerd[1603]: time="2025-01-29T11:23:44.792843992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.795445 containerd[1603]: time="2025-01-29T11:23:44.795400598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnqjx,Uid:35088270-b85c-4fff-9f47-df92a059da0a,Namespace:calico-system,Attempt:5,} returns sandbox id \"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f\"" Jan 29 11:23:44.797181 containerd[1603]: time="2025-01-29T11:23:44.797159414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-l4mfg,Uid:337f98ac-1b65-4615-aa71-55b1dcfcd61e,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2\"" Jan 29 11:23:44.798283 containerd[1603]: time="2025-01-29T11:23:44.798266015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:23:44.808029 containerd[1603]: time="2025-01-29T11:23:44.806031013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.808029 containerd[1603]: time="2025-01-29T11:23:44.806078682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.808029 containerd[1603]: time="2025-01-29T11:23:44.806088861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.808029 containerd[1603]: time="2025-01-29T11:23:44.806170584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.408 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.423 [INFO][4967] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0 coredns-7db6d8ff4d- kube-system 254f70df-c108-425a-b324-8fe9c6bfe00e 765 0 2025-01-29 11:23:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-q2fch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali578c7bd7f0f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.423 [INFO][4967] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.514 [INFO][5047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" HandleID="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Workload="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.530 [INFO][5047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" HandleID="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Workload="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000507c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-q2fch", "timestamp":"2025-01-29 11:23:44.514686145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.530 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.687 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.687 [INFO][5047] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.692 [INFO][5047] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.700 [INFO][5047] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.708 [INFO][5047] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.710 [INFO][5047] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.715 [INFO][5047] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.715 [INFO][5047] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.721 [INFO][5047] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.734 [INFO][5047] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.743 [INFO][5047] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.743 [INFO][5047] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" host="localhost" Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.744 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:23:44.808346 containerd[1603]: 2025-01-29 11:23:44.744 [INFO][5047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" HandleID="k8s-pod-network.260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Workload="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.770 [INFO][4967] cni-plugin/k8s.go 386: Populated endpoint ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"254f70df-c108-425a-b324-8fe9c6bfe00e", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-q2fch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali578c7bd7f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.770 [INFO][4967] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.770 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali578c7bd7f0f ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.780 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 
11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.783 [INFO][4967] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"254f70df-c108-425a-b324-8fe9c6bfe00e", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e", Pod:"coredns-7db6d8ff4d-q2fch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali578c7bd7f0f", MAC:"4a:48:25:37:72:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:23:44.809347 containerd[1603]: 2025-01-29 11:23:44.802 [INFO][4967] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2fch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q2fch-eth0" Jan 29 11:23:44.842065 containerd[1603]: time="2025-01-29T11:23:44.841629623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:23:44.842065 containerd[1603]: time="2025-01-29T11:23:44.841731816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:23:44.842065 containerd[1603]: time="2025-01-29T11:23:44.841746614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.842065 containerd[1603]: time="2025-01-29T11:23:44.841892938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:23:44.842291 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.845344 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.874503 containerd[1603]: time="2025-01-29T11:23:44.874451355Z" level=info msg="StartContainer for \"39635a25244f1242b3d833a9a9e335d4c8aa6789ad4b4fe07046c75b81a6fbeb\" returns successfully" Jan 29 11:23:44.877027 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:23:44.885001 containerd[1603]: time="2025-01-29T11:23:44.884950121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695974dcd7-g2c9b,Uid:530f2b50-c66a-4ebc-869f-eeb1d00efe6c,Namespace:calico-system,Attempt:6,} returns sandbox id \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\"" Jan 29 11:23:44.887808 containerd[1603]: time="2025-01-29T11:23:44.887152502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b9f5bc8c-kfgmw,Uid:10ddf1c0-21b1-4d7e-af9d-b4ca369b7742,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c\"" Jan 29 11:23:44.908239 containerd[1603]: time="2025-01-29T11:23:44.908191994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2fch,Uid:254f70df-c108-425a-b324-8fe9c6bfe00e,Namespace:kube-system,Attempt:6,} returns sandbox id \"260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e\"" Jan 29 11:23:44.908819 kubelet[2833]: E0129 11:23:44.908783 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:44.911587 containerd[1603]: 
time="2025-01-29T11:23:44.911494082Z" level=info msg="CreateContainer within sandbox \"260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:23:44.942911 containerd[1603]: time="2025-01-29T11:23:44.942822195Z" level=info msg="CreateContainer within sandbox \"260453053ab736c3f4a2e6a6d61f146e3a2e55b328fd70df8d298bbf4e4fb17e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01162c441e73b8a64418c6ef5131c772dd7555d487ca5c6aa2100de588cf3f75\"" Jan 29 11:23:44.943472 containerd[1603]: time="2025-01-29T11:23:44.943448703Z" level=info msg="StartContainer for \"01162c441e73b8a64418c6ef5131c772dd7555d487ca5c6aa2100de588cf3f75\"" Jan 29 11:23:45.004377 containerd[1603]: time="2025-01-29T11:23:45.004326364Z" level=info msg="StartContainer for \"01162c441e73b8a64418c6ef5131c772dd7555d487ca5c6aa2100de588cf3f75\" returns successfully" Jan 29 11:23:45.633789 systemd-networkd[1252]: cali44476355121: Gained IPv6LL Jan 29 11:23:45.682137 kubelet[2833]: E0129 11:23:45.682074 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:45.688601 kubelet[2833]: E0129 11:23:45.688564 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:45.691312 kubelet[2833]: I0129 11:23:45.691258 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8qdw4" podStartSLOduration=23.691244572 podStartE2EDuration="23.691244572s" podCreationTimestamp="2025-01-29 11:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:45.690915163 +0000 UTC m=+39.495971268" 
watchObservedRunningTime="2025-01-29 11:23:45.691244572 +0000 UTC m=+39.496300677" Jan 29 11:23:45.705669 kubelet[2833]: I0129 11:23:45.703217 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q2fch" podStartSLOduration=23.703201827 podStartE2EDuration="23.703201827s" podCreationTimestamp="2025-01-29 11:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:23:45.702877337 +0000 UTC m=+39.507933462" watchObservedRunningTime="2025-01-29 11:23:45.703201827 +0000 UTC m=+39.508257932" Jan 29 11:23:45.953754 systemd-networkd[1252]: cali2ce8e5616f6: Gained IPv6LL Jan 29 11:23:46.081853 systemd-networkd[1252]: calicff975e505b: Gained IPv6LL Jan 29 11:23:46.209766 systemd-networkd[1252]: cali578c7bd7f0f: Gained IPv6LL Jan 29 11:23:46.337771 systemd-networkd[1252]: cali00f6c6bd503: Gained IPv6LL Jan 29 11:23:46.401742 systemd-networkd[1252]: cali3a1be07509d: Gained IPv6LL Jan 29 11:23:46.585210 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500). Jan 29 11:23:46.631926 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:46.634237 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:46.642735 systemd-logind[1584]: New session 10 of user core. Jan 29 11:23:46.649658 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 11:23:46.689737 containerd[1603]: time="2025-01-29T11:23:46.689690897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.690423 containerd[1603]: time="2025-01-29T11:23:46.690368971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:23:46.691500 containerd[1603]: time="2025-01-29T11:23:46.691465352Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.693489 containerd[1603]: time="2025-01-29T11:23:46.693447878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.694026 containerd[1603]: time="2025-01-29T11:23:46.693997811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.895655192s" Jan 29 11:23:46.694069 containerd[1603]: time="2025-01-29T11:23:46.694026645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:23:46.694984 containerd[1603]: time="2025-01-29T11:23:46.694956323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:23:46.696132 containerd[1603]: time="2025-01-29T11:23:46.696081367Z" level=info msg="CreateContainer within sandbox \"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:23:46.712211 kubelet[2833]: E0129 11:23:46.712177 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:46.712681 kubelet[2833]: E0129 11:23:46.712460 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:46.722375 containerd[1603]: time="2025-01-29T11:23:46.722338994Z" level=info msg="CreateContainer within sandbox \"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"96e700231ec0623459c5a5736a9a983fd1ee7c2f20c09d5d2763ff3bee9801d3\"" Jan 29 11:23:46.724424 containerd[1603]: time="2025-01-29T11:23:46.724356586Z" level=info msg="StartContainer for \"96e700231ec0623459c5a5736a9a983fd1ee7c2f20c09d5d2763ff3bee9801d3\"" Jan 29 11:23:46.780987 sshd[5644]: Connection closed by 10.0.0.1 port 59500 Jan 29 11:23:46.781914 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:46.784163 containerd[1603]: time="2025-01-29T11:23:46.784120914Z" level=info msg="StartContainer for \"96e700231ec0623459c5a5736a9a983fd1ee7c2f20c09d5d2763ff3bee9801d3\" returns successfully" Jan 29 11:23:46.788178 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:59500.service: Deactivated successfully. Jan 29 11:23:46.791095 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:23:46.791097 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:23:46.792280 systemd-logind[1584]: Removed session 10. 
Jan 29 11:23:47.716916 kubelet[2833]: E0129 11:23:47.716880 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:47.717360 kubelet[2833]: E0129 11:23:47.717058 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:48.796516 containerd[1603]: time="2025-01-29T11:23:48.796464084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:48.797220 containerd[1603]: time="2025-01-29T11:23:48.797159319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:23:48.798124 containerd[1603]: time="2025-01-29T11:23:48.798090400Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:48.800234 containerd[1603]: time="2025-01-29T11:23:48.800204782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:48.800823 containerd[1603]: time="2025-01-29T11:23:48.800801543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.105812528s" Jan 29 11:23:48.800869 containerd[1603]: time="2025-01-29T11:23:48.800826890Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:23:48.801603 containerd[1603]: time="2025-01-29T11:23:48.801582581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:23:48.802669 containerd[1603]: time="2025-01-29T11:23:48.802630359Z" level=info msg="CreateContainer within sandbox \"dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:23:48.817421 containerd[1603]: time="2025-01-29T11:23:48.817383538Z" level=info msg="CreateContainer within sandbox \"dc1577e592f22e4cd5fce971d9ac478f32c96d0ef63154a6f54bc6bf188d52a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd002186df84cffb557bb49ef2c59ff6762fa1201610f48e1f8d0f7ef0b113fa\"" Jan 29 11:23:48.817814 containerd[1603]: time="2025-01-29T11:23:48.817789772Z" level=info msg="StartContainer for \"cd002186df84cffb557bb49ef2c59ff6762fa1201610f48e1f8d0f7ef0b113fa\"" Jan 29 11:23:48.884897 containerd[1603]: time="2025-01-29T11:23:48.884857983Z" level=info msg="StartContainer for \"cd002186df84cffb557bb49ef2c59ff6762fa1201610f48e1f8d0f7ef0b113fa\" returns successfully" Jan 29 11:23:49.733889 kubelet[2833]: I0129 11:23:49.733831 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b9f5bc8c-l4mfg" podStartSLOduration=17.73147763 podStartE2EDuration="21.733816367s" podCreationTimestamp="2025-01-29 11:23:28 +0000 UTC" firstStartedPulling="2025-01-29 11:23:44.79910925 +0000 UTC m=+38.604165355" lastFinishedPulling="2025-01-29 11:23:48.801447987 +0000 UTC m=+42.606504092" observedRunningTime="2025-01-29 11:23:49.733663569 +0000 UTC m=+43.538719674" watchObservedRunningTime="2025-01-29 11:23:49.733816367 +0000 UTC m=+43.538872472" Jan 29 11:23:49.854989 kubelet[2833]: I0129 11:23:49.854940 2833 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:23:49.855769 kubelet[2833]: E0129 11:23:49.855729 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:50.727391 kubelet[2833]: I0129 11:23:50.727351 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:23:50.727914 kubelet[2833]: E0129 11:23:50.727883 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:23:51.336671 kernel: bpftool[5880]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:23:51.422290 containerd[1603]: time="2025-01-29T11:23:51.422239545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:51.423512 containerd[1603]: time="2025-01-29T11:23:51.423475125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:23:51.424772 containerd[1603]: time="2025-01-29T11:23:51.424751784Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:51.427097 containerd[1603]: time="2025-01-29T11:23:51.427069336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:51.427754 containerd[1603]: time="2025-01-29T11:23:51.427710009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.626100729s" Jan 29 11:23:51.427805 containerd[1603]: time="2025-01-29T11:23:51.427752188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:23:51.429069 containerd[1603]: time="2025-01-29T11:23:51.428638904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:23:51.439522 containerd[1603]: time="2025-01-29T11:23:51.439480457Z" level=info msg="CreateContainer within sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:23:51.458088 containerd[1603]: time="2025-01-29T11:23:51.458038915Z" level=info msg="CreateContainer within sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\"" Jan 29 11:23:51.458524 containerd[1603]: time="2025-01-29T11:23:51.458501523Z" level=info msg="StartContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\"" Jan 29 11:23:51.587892 systemd-networkd[1252]: vxlan.calico: Link UP Jan 29 11:23:51.587905 systemd-networkd[1252]: vxlan.calico: Gained carrier Jan 29 11:23:51.608119 containerd[1603]: time="2025-01-29T11:23:51.608079828Z" level=info msg="StartContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" returns successfully" Jan 29 11:23:51.785084 kubelet[2833]: I0129 11:23:51.784970 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-695974dcd7-g2c9b" podStartSLOduration=18.243257992 podStartE2EDuration="24.784953189s" podCreationTimestamp="2025-01-29 11:23:27 +0000 UTC" firstStartedPulling="2025-01-29 11:23:44.886775362 +0000 UTC m=+38.691831467" lastFinishedPulling="2025-01-29 11:23:51.428470559 +0000 UTC m=+45.233526664" observedRunningTime="2025-01-29 11:23:51.741744552 +0000 UTC m=+45.546800658" watchObservedRunningTime="2025-01-29 11:23:51.784953189 +0000 UTC m=+45.590009284" Jan 29 11:23:51.792979 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:59510.service - OpenSSH per-connection server daemon (10.0.0.1:59510). Jan 29 11:23:51.828941 containerd[1603]: time="2025-01-29T11:23:51.828269709Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:51.829944 containerd[1603]: time="2025-01-29T11:23:51.829871827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:23:51.831855 containerd[1603]: time="2025-01-29T11:23:51.831667861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 402.981047ms" Jan 29 11:23:51.831978 containerd[1603]: time="2025-01-29T11:23:51.831961993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:23:51.833390 containerd[1603]: time="2025-01-29T11:23:51.833365138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:23:51.834480 containerd[1603]: 
time="2025-01-29T11:23:51.834461078Z" level=info msg="CreateContainer within sandbox \"ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:23:51.848248 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 59510 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:51.850054 containerd[1603]: time="2025-01-29T11:23:51.850018037Z" level=info msg="CreateContainer within sandbox \"ac698f8bbd130c1f84d9342777c0cf6d2913069ab22859e0983944bc2393cd4c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"44d423e89b9c6c4079e7192663a6bd0f9c98cd836164d021c5f49f1fe4f52e43\"" Jan 29 11:23:51.850523 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:51.851251 containerd[1603]: time="2025-01-29T11:23:51.850834881Z" level=info msg="StartContainer for \"44d423e89b9c6c4079e7192663a6bd0f9c98cd836164d021c5f49f1fe4f52e43\"" Jan 29 11:23:51.857113 systemd-logind[1584]: New session 11 of user core. Jan 29 11:23:51.862179 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:23:51.933808 containerd[1603]: time="2025-01-29T11:23:51.933752060Z" level=info msg="StartContainer for \"44d423e89b9c6c4079e7192663a6bd0f9c98cd836164d021c5f49f1fe4f52e43\" returns successfully" Jan 29 11:23:52.010625 sshd[6023]: Connection closed by 10.0.0.1 port 59510 Jan 29 11:23:52.011805 sshd-session[5983]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:52.019883 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:59514.service - OpenSSH per-connection server daemon (10.0.0.1:59514). Jan 29 11:23:52.021026 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:59510.service: Deactivated successfully. Jan 29 11:23:52.025747 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:23:52.026039 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. 
Jan 29 11:23:52.027520 systemd-logind[1584]: Removed session 11. Jan 29 11:23:52.064619 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 59514 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:52.066403 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:52.071056 systemd-logind[1584]: New session 12 of user core. Jan 29 11:23:52.080911 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:23:52.265729 sshd[6076]: Connection closed by 10.0.0.1 port 59514 Jan 29 11:23:52.265230 sshd-session[6070]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:52.277618 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:59528.service - OpenSSH per-connection server daemon (10.0.0.1:59528). Jan 29 11:23:52.281721 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:59514.service: Deactivated successfully. Jan 29 11:23:52.292630 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:23:52.294893 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:23:52.297115 systemd-logind[1584]: Removed session 12. Jan 29 11:23:52.372175 sshd[6084]: Accepted publickey for core from 10.0.0.1 port 59528 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:52.374075 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:52.379570 systemd-logind[1584]: New session 13 of user core. Jan 29 11:23:52.385203 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:23:52.519283 sshd[6090]: Connection closed by 10.0.0.1 port 59528 Jan 29 11:23:52.520366 sshd-session[6084]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:52.525318 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:59528.service: Deactivated successfully. Jan 29 11:23:52.528056 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. 
Jan 29 11:23:52.528149 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:23:52.529384 systemd-logind[1584]: Removed session 13. Jan 29 11:23:52.745906 kubelet[2833]: I0129 11:23:52.745837 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b9f5bc8c-kfgmw" podStartSLOduration=17.801048278 podStartE2EDuration="24.745820197s" podCreationTimestamp="2025-01-29 11:23:28 +0000 UTC" firstStartedPulling="2025-01-29 11:23:44.888118919 +0000 UTC m=+38.693175024" lastFinishedPulling="2025-01-29 11:23:51.832890838 +0000 UTC m=+45.637946943" observedRunningTime="2025-01-29 11:23:52.745519111 +0000 UTC m=+46.550575216" watchObservedRunningTime="2025-01-29 11:23:52.745820197 +0000 UTC m=+46.550876302" Jan 29 11:23:53.314788 systemd-networkd[1252]: vxlan.calico: Gained IPv6LL Jan 29 11:23:53.360591 containerd[1603]: time="2025-01-29T11:23:53.360539748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:53.361344 containerd[1603]: time="2025-01-29T11:23:53.361305386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:23:53.362336 containerd[1603]: time="2025-01-29T11:23:53.362297629Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:53.364337 containerd[1603]: time="2025-01-29T11:23:53.364308084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:53.364922 containerd[1603]: time="2025-01-29T11:23:53.364891019Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.531409862s" Jan 29 11:23:53.364957 containerd[1603]: time="2025-01-29T11:23:53.364920975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:23:53.366931 containerd[1603]: time="2025-01-29T11:23:53.366906444Z" level=info msg="CreateContainer within sandbox \"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:23:53.381488 containerd[1603]: time="2025-01-29T11:23:53.381452129Z" level=info msg="CreateContainer within sandbox \"bf6427c1162de6af8f7606c9985ad98e019c5cbb3b5731ae84eea8957208735f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"75e748332e1b61d1a87d14196550f8e897e0726e30186f07607e77251e4a0b35\"" Jan 29 11:23:53.381913 containerd[1603]: time="2025-01-29T11:23:53.381882567Z" level=info msg="StartContainer for \"75e748332e1b61d1a87d14196550f8e897e0726e30186f07607e77251e4a0b35\"" Jan 29 11:23:53.440342 containerd[1603]: time="2025-01-29T11:23:53.440303104Z" level=info msg="StartContainer for \"75e748332e1b61d1a87d14196550f8e897e0726e30186f07607e77251e4a0b35\" returns successfully" Jan 29 11:23:53.741600 kubelet[2833]: I0129 11:23:53.741548 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:23:53.805939 kubelet[2833]: I0129 11:23:53.805867 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gnqjx" 
podStartSLOduration=18.238135973 podStartE2EDuration="26.805851095s" podCreationTimestamp="2025-01-29 11:23:27 +0000 UTC" firstStartedPulling="2025-01-29 11:23:44.79791242 +0000 UTC m=+38.602968525" lastFinishedPulling="2025-01-29 11:23:53.365627542 +0000 UTC m=+47.170683647" observedRunningTime="2025-01-29 11:23:53.804081763 +0000 UTC m=+47.609137888" watchObservedRunningTime="2025-01-29 11:23:53.805851095 +0000 UTC m=+47.610907200" Jan 29 11:23:54.353319 kubelet[2833]: I0129 11:23:54.353282 2833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:23:54.353319 kubelet[2833]: I0129 11:23:54.353317 2833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:23:57.532117 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:44994.service - OpenSSH per-connection server daemon (10.0.0.1:44994). Jan 29 11:23:57.576377 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 44994 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:23:57.578261 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:57.582323 systemd-logind[1584]: New session 14 of user core. Jan 29 11:23:57.589920 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:23:57.711059 sshd[6163]: Connection closed by 10.0.0.1 port 44994 Jan 29 11:23:57.711408 sshd-session[6160]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:57.717911 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:44994.service: Deactivated successfully. Jan 29 11:23:57.720363 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:23:57.720422 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:23:57.722080 systemd-logind[1584]: Removed session 14. 
Jan 29 11:24:02.032364 kubelet[2833]: E0129 11:24:02.032073 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:24:02.387617 kubelet[2833]: I0129 11:24:02.387564 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:24:02.725855 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:44998.service - OpenSSH per-connection server daemon (10.0.0.1:44998). Jan 29 11:24:02.770062 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 44998 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:02.771741 sshd-session[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:02.776029 systemd-logind[1584]: New session 15 of user core. Jan 29 11:24:02.785992 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:24:02.897549 sshd[6212]: Connection closed by 10.0.0.1 port 44998 Jan 29 11:24:02.898306 sshd-session[6209]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:02.906316 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:45006.service - OpenSSH per-connection server daemon (10.0.0.1:45006). Jan 29 11:24:02.907032 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:44998.service: Deactivated successfully. Jan 29 11:24:02.909982 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:24:02.912283 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:24:02.914098 systemd-logind[1584]: Removed session 15. Jan 29 11:24:02.946239 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 45006 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:02.949201 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:02.959706 systemd-logind[1584]: New session 16 of user core. 
Jan 29 11:24:02.966183 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:24:02.986772 containerd[1603]: time="2025-01-29T11:24:02.986650892Z" level=info msg="StopContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" with timeout 300 (s)" Jan 29 11:24:02.989748 containerd[1603]: time="2025-01-29T11:24:02.989724991Z" level=info msg="Stop container \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" with signal terminated" Jan 29 11:24:03.147408 containerd[1603]: time="2025-01-29T11:24:03.146664179Z" level=info msg="StopContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" with timeout 30 (s)" Jan 29 11:24:03.149137 containerd[1603]: time="2025-01-29T11:24:03.149024879Z" level=info msg="Stop container \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" with signal terminated" Jan 29 11:24:03.149212 containerd[1603]: time="2025-01-29T11:24:03.149177996Z" level=info msg="StopContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" with timeout 5 (s)" Jan 29 11:24:03.149558 containerd[1603]: time="2025-01-29T11:24:03.149510620Z" level=info msg="Stop container \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" with signal terminated" Jan 29 11:24:03.201110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:03.216782 containerd[1603]: time="2025-01-29T11:24:03.216715352Z" level=info msg="shim disconnected" id=b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a namespace=k8s.io Jan 29 11:24:03.216782 containerd[1603]: time="2025-01-29T11:24:03.216783821Z" level=warning msg="cleaning up after shim disconnected" id=b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a namespace=k8s.io Jan 29 11:24:03.217094 containerd[1603]: time="2025-01-29T11:24:03.216794000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:03.217730 containerd[1603]: time="2025-01-29T11:24:03.217231491Z" level=info msg="shim disconnected" id=b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723 namespace=k8s.io Jan 29 11:24:03.217730 containerd[1603]: time="2025-01-29T11:24:03.217271166Z" level=warning msg="cleaning up after shim disconnected" id=b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723 namespace=k8s.io Jan 29 11:24:03.217730 containerd[1603]: time="2025-01-29T11:24:03.217287977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:03.220119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:03.265785 containerd[1603]: time="2025-01-29T11:24:03.265629018Z" level=info msg="StopContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" returns successfully" Jan 29 11:24:03.266443 containerd[1603]: time="2025-01-29T11:24:03.266407829Z" level=info msg="StopPodSandbox for \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\"" Jan 29 11:24:03.273114 containerd[1603]: time="2025-01-29T11:24:03.273048612Z" level=info msg="Container to stop \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:24:03.273693 containerd[1603]: time="2025-01-29T11:24:03.273385344Z" level=info msg="StopContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" returns successfully" Jan 29 11:24:03.273925 containerd[1603]: time="2025-01-29T11:24:03.273902745Z" level=info msg="StopPodSandbox for \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\"" Jan 29 11:24:03.273960 containerd[1603]: time="2025-01-29T11:24:03.273927371Z" level=info msg="Container to stop \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:24:03.273988 containerd[1603]: time="2025-01-29T11:24:03.273959321Z" level=info msg="Container to stop \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:24:03.273988 containerd[1603]: time="2025-01-29T11:24:03.273968418Z" level=info msg="Container to stop \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:24:03.278785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74-shm.mount: Deactivated successfully. 
Jan 29 11:24:03.279041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece-shm.mount: Deactivated successfully. Jan 29 11:24:03.308213 containerd[1603]: time="2025-01-29T11:24:03.307264615Z" level=info msg="shim disconnected" id=01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece namespace=k8s.io Jan 29 11:24:03.308213 containerd[1603]: time="2025-01-29T11:24:03.308052344Z" level=warning msg="cleaning up after shim disconnected" id=01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece namespace=k8s.io Jan 29 11:24:03.308213 containerd[1603]: time="2025-01-29T11:24:03.308065278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:03.315980 containerd[1603]: time="2025-01-29T11:24:03.315912524Z" level=info msg="shim disconnected" id=a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74 namespace=k8s.io Jan 29 11:24:03.315980 containerd[1603]: time="2025-01-29T11:24:03.315971325Z" level=warning msg="cleaning up after shim disconnected" id=a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74 namespace=k8s.io Jan 29 11:24:03.315980 containerd[1603]: time="2025-01-29T11:24:03.315980913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:03.336703 containerd[1603]: time="2025-01-29T11:24:03.336623833Z" level=info msg="TearDown network for sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" successfully" Jan 29 11:24:03.336703 containerd[1603]: time="2025-01-29T11:24:03.336676642Z" level=info msg="StopPodSandbox for \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" returns successfully" Jan 29 11:24:03.381730 kubelet[2833]: I0129 11:24:03.379227 2833 topology_manager.go:215] "Topology Admit Handler" podUID="73bdfca7-f97e-40cb-a10e-3c4e36825c7a" podNamespace="calico-system" podName="calico-node-6lnzf" Jan 29 11:24:03.382609 kubelet[2833]: E0129 11:24:03.382117 2833 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a604ab62-8067-431a-883d-8827e924e33c" containerName="flexvol-driver" Jan 29 11:24:03.382609 kubelet[2833]: E0129 11:24:03.382138 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a604ab62-8067-431a-883d-8827e924e33c" containerName="install-cni" Jan 29 11:24:03.382609 kubelet[2833]: E0129 11:24:03.382144 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a604ab62-8067-431a-883d-8827e924e33c" containerName="calico-node" Jan 29 11:24:03.382609 kubelet[2833]: I0129 11:24:03.382174 2833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a604ab62-8067-431a-883d-8827e924e33c" containerName="calico-node" Jan 29 11:24:03.407873 sshd[6227]: Connection closed by 10.0.0.1 port 45006 Jan 29 11:24:03.409096 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:03.417305 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:45010.service - OpenSSH per-connection server daemon (10.0.0.1:45010). Jan 29 11:24:03.418605 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:45006.service: Deactivated successfully. Jan 29 11:24:03.420970 systemd-networkd[1252]: cali00f6c6bd503: Link DOWN Jan 29 11:24:03.420974 systemd-networkd[1252]: cali00f6c6bd503: Lost carrier Jan 29 11:24:03.427120 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:24:03.427989 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:24:03.429379 systemd-logind[1584]: Removed session 16. 
Jan 29 11:24:03.466761 sshd[6413]: Accepted publickey for core from 10.0.0.1 port 45010 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:03.468339 sshd-session[6413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:03.472171 kubelet[2833]: I0129 11:24:03.472144 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a604ab62-8067-431a-883d-8827e924e33c-node-certs\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472251 kubelet[2833]: I0129 11:24:03.472190 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-lib-modules\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472251 kubelet[2833]: I0129 11:24:03.472206 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-log-dir\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472251 kubelet[2833]: I0129 11:24:03.472220 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-lib-calico\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472251 kubelet[2833]: I0129 11:24:03.472237 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-run-calico\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") 
" Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472255 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-policysync\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472270 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-bin-dir\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472292 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-xtables-lock\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472306 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-net-dir\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472324 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a604ab62-8067-431a-883d-8827e924e33c-tigera-ca-bundle\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472406 kubelet[2833]: I0129 11:24:03.472344 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhz4v\" (UniqueName: 
\"kubernetes.io/projected/a604ab62-8067-431a-883d-8827e924e33c-kube-api-access-dhz4v\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472586 kubelet[2833]: I0129 11:24:03.472360 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-flexvol-driver-host\") pod \"a604ab62-8067-431a-883d-8827e924e33c\" (UID: \"a604ab62-8067-431a-883d-8827e924e33c\") " Jan 29 11:24:03.472586 kubelet[2833]: I0129 11:24:03.472433 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.472786 kubelet[2833]: I0129 11:24:03.472769 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.473052 kubelet[2833]: I0129 11:24:03.472786 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-policysync" (OuterVolumeSpecName: "policysync") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.473125 kubelet[2833]: I0129 11:24:03.472818 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.473208 systemd-logind[1584]: New session 17 of user core. Jan 29 11:24:03.474773 kubelet[2833]: I0129 11:24:03.472840 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.474773 kubelet[2833]: I0129 11:24:03.472839 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.474773 kubelet[2833]: I0129 11:24:03.472853 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.474773 kubelet[2833]: I0129 11:24:03.472862 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.474773 kubelet[2833]: I0129 11:24:03.472870 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:24:03.477026 kubelet[2833]: I0129 11:24:03.476959 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a604ab62-8067-431a-883d-8827e924e33c-kube-api-access-dhz4v" (OuterVolumeSpecName: "kube-api-access-dhz4v") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "kube-api-access-dhz4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:03.477200 kubelet[2833]: I0129 11:24:03.477173 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a604ab62-8067-431a-883d-8827e924e33c-node-certs" (OuterVolumeSpecName: "node-certs") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:24:03.477999 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 11:24:03.478818 kubelet[2833]: I0129 11:24:03.478785 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a604ab62-8067-431a-883d-8827e924e33c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a604ab62-8067-431a-883d-8827e924e33c" (UID: "a604ab62-8067-431a-883d-8827e924e33c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.419 [INFO][6406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.419 [INFO][6406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" iface="eth0" netns="/var/run/netns/cni-e7f1f84b-93c5-26cc-7b8b-7361ce1363b1" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.420 [INFO][6406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" iface="eth0" netns="/var/run/netns/cni-e7f1f84b-93c5-26cc-7b8b-7361ce1363b1" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.428 [INFO][6406] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" after=8.878502ms iface="eth0" netns="/var/run/netns/cni-e7f1f84b-93c5-26cc-7b8b-7361ce1363b1" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.428 [INFO][6406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.428 [INFO][6406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.449 [INFO][6420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.449 [INFO][6420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.449 [INFO][6420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.475 [INFO][6420] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.475 [INFO][6420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0" Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.477 [INFO][6420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:24:03.483573 containerd[1603]: 2025-01-29 11:24:03.479 [INFO][6406] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Jan 29 11:24:03.483986 containerd[1603]: time="2025-01-29T11:24:03.483806088Z" level=info msg="TearDown network for sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" successfully" Jan 29 11:24:03.483986 containerd[1603]: time="2025-01-29T11:24:03.483829913Z" level=info msg="StopPodSandbox for \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" returns successfully" Jan 29 11:24:03.484288 containerd[1603]: time="2025-01-29T11:24:03.484239752Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\"" Jan 29 11:24:03.484394 containerd[1603]: time="2025-01-29T11:24:03.484371359Z" level=info msg="TearDown network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" successfully" Jan 29 11:24:03.484394 containerd[1603]: time="2025-01-29T11:24:03.484386888Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" returns successfully" Jan 29 11:24:03.484659 containerd[1603]: time="2025-01-29T11:24:03.484616028Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\"" Jan 29 11:24:03.484740 containerd[1603]: time="2025-01-29T11:24:03.484726886Z" level=info msg="TearDown network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" successfully" Jan 29 11:24:03.484763 containerd[1603]: time="2025-01-29T11:24:03.484739129Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" returns successfully" Jan 29 11:24:03.484949 containerd[1603]: time="2025-01-29T11:24:03.484926641Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\"" Jan 29 11:24:03.485030 containerd[1603]: time="2025-01-29T11:24:03.485015568Z" level=info msg="TearDown network for sandbox 
\"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully" Jan 29 11:24:03.485062 containerd[1603]: time="2025-01-29T11:24:03.485028803Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully" Jan 29 11:24:03.485242 containerd[1603]: time="2025-01-29T11:24:03.485223939Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\"" Jan 29 11:24:03.485323 containerd[1603]: time="2025-01-29T11:24:03.485307346Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully" Jan 29 11:24:03.485323 containerd[1603]: time="2025-01-29T11:24:03.485320771Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully" Jan 29 11:24:03.485556 containerd[1603]: time="2025-01-29T11:24:03.485537357Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\"" Jan 29 11:24:03.485703 containerd[1603]: time="2025-01-29T11:24:03.485686648Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully" Jan 29 11:24:03.485734 containerd[1603]: time="2025-01-29T11:24:03.485702237Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully" Jan 29 11:24:03.486011 containerd[1603]: time="2025-01-29T11:24:03.485991259Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\"" Jan 29 11:24:03.486085 containerd[1603]: time="2025-01-29T11:24:03.486071079Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully" Jan 29 11:24:03.486113 containerd[1603]: time="2025-01-29T11:24:03.486084113Z" level=info msg="StopPodSandbox for 
\"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully" Jan 29 11:24:03.573352 kubelet[2833]: I0129 11:24:03.573306 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-cni-log-dir\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573352 kubelet[2833]: I0129 11:24:03.573350 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-cni-bin-dir\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573502 kubelet[2833]: I0129 11:24:03.573369 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-lib-modules\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573502 kubelet[2833]: I0129 11:24:03.573384 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-var-lib-calico\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573502 kubelet[2833]: I0129 11:24:03.573401 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-cni-net-dir\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " 
pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573502 kubelet[2833]: I0129 11:24:03.573418 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-node-certs\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573502 kubelet[2833]: I0129 11:24:03.573454 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-var-run-calico\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573616 kubelet[2833]: I0129 11:24:03.573470 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntsn2\" (UniqueName: \"kubernetes.io/projected/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-kube-api-access-ntsn2\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573616 kubelet[2833]: I0129 11:24:03.573506 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-xtables-lock\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573616 kubelet[2833]: I0129 11:24:03.573521 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-flexvol-driver-host\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 
11:24:03.573616 kubelet[2833]: I0129 11:24:03.573534 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-policysync\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573616 kubelet[2833]: I0129 11:24:03.573550 2833 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73bdfca7-f97e-40cb-a10e-3c4e36825c7a-tigera-ca-bundle\") pod \"calico-node-6lnzf\" (UID: \"73bdfca7-f97e-40cb-a10e-3c4e36825c7a\") " pod="calico-system/calico-node-6lnzf" Jan 29 11:24:03.573616 kubelet[2833]: I0129 11:24:03.573573 2833 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dhz4v\" (UniqueName: \"kubernetes.io/projected/a604ab62-8067-431a-883d-8827e924e33c-kube-api-access-dhz4v\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573582 2833 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573592 2833 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a604ab62-8067-431a-883d-8827e924e33c-node-certs\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573600 2833 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573608 2833 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573628 2833 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573636 2833 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573658 2833 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-policysync\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573784 kubelet[2833]: I0129 11:24:03.573666 2833 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573950 kubelet[2833]: I0129 11:24:03.573673 2833 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573950 kubelet[2833]: I0129 11:24:03.573681 2833 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a604ab62-8067-431a-883d-8827e924e33c-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.573950 kubelet[2833]: I0129 11:24:03.573689 2833 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a604ab62-8067-431a-883d-8827e924e33c-tigera-ca-bundle\") on node \"localhost\" 
DevicePath \"\"" Jan 29 11:24:03.674708 kubelet[2833]: I0129 11:24:03.674666 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-tigera-ca-bundle\") pod \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\" (UID: \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\") " Jan 29 11:24:03.674708 kubelet[2833]: I0129 11:24:03.674716 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqqdz\" (UniqueName: \"kubernetes.io/projected/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-kube-api-access-cqqdz\") pod \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\" (UID: \"530f2b50-c66a-4ebc-869f-eeb1d00efe6c\") " Jan 29 11:24:03.678520 kubelet[2833]: I0129 11:24:03.678490 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-kube-api-access-cqqdz" (OuterVolumeSpecName: "kube-api-access-cqqdz") pod "530f2b50-c66a-4ebc-869f-eeb1d00efe6c" (UID: "530f2b50-c66a-4ebc-869f-eeb1d00efe6c"). InnerVolumeSpecName "kube-api-access-cqqdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:03.679136 kubelet[2833]: I0129 11:24:03.679108 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "530f2b50-c66a-4ebc-869f-eeb1d00efe6c" (UID: "530f2b50-c66a-4ebc-869f-eeb1d00efe6c"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:03.768884 kubelet[2833]: I0129 11:24:03.768709 2833 scope.go:117] "RemoveContainer" containerID="b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723" Jan 29 11:24:03.775599 kubelet[2833]: I0129 11:24:03.775474 2833 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cqqdz\" (UniqueName: \"kubernetes.io/projected/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-kube-api-access-cqqdz\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.775599 kubelet[2833]: I0129 11:24:03.775497 2833 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/530f2b50-c66a-4ebc-869f-eeb1d00efe6c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:03.780929 containerd[1603]: time="2025-01-29T11:24:03.778324678Z" level=info msg="RemoveContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\"" Jan 29 11:24:03.810464 containerd[1603]: time="2025-01-29T11:24:03.810417166Z" level=info msg="RemoveContainer for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" returns successfully" Jan 29 11:24:03.830392 kubelet[2833]: I0129 11:24:03.830146 2833 scope.go:117] "RemoveContainer" containerID="b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723" Jan 29 11:24:03.831987 containerd[1603]: time="2025-01-29T11:24:03.831691340Z" level=error msg="ContainerStatus for \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\": not found" Jan 29 11:24:03.832067 kubelet[2833]: E0129 11:24:03.831888 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\": not found" containerID="b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723" Jan 29 11:24:03.832067 kubelet[2833]: I0129 11:24:03.831923 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723"} err="failed to get container status \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\": rpc error: code = NotFound desc = an error occurred when try to find container \"b77625a04e81dbb9e17e7cd0ebe75a27838f2a5f7f00570feeae43d558781723\": not found" Jan 29 11:24:03.832067 kubelet[2833]: I0129 11:24:03.832022 2833 scope.go:117] "RemoveContainer" containerID="b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a" Jan 29 11:24:03.835160 containerd[1603]: time="2025-01-29T11:24:03.835081452Z" level=info msg="RemoveContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\"" Jan 29 11:24:03.840190 containerd[1603]: time="2025-01-29T11:24:03.840152719Z" level=info msg="RemoveContainer for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" returns successfully" Jan 29 11:24:03.840516 kubelet[2833]: I0129 11:24:03.840398 2833 scope.go:117] "RemoveContainer" containerID="f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5" Jan 29 11:24:03.841318 containerd[1603]: time="2025-01-29T11:24:03.841283332Z" level=info msg="RemoveContainer for \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\"" Jan 29 11:24:03.846341 containerd[1603]: time="2025-01-29T11:24:03.846302010Z" level=info msg="RemoveContainer for \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\" returns successfully" Jan 29 11:24:03.846565 kubelet[2833]: I0129 11:24:03.846537 2833 scope.go:117] "RemoveContainer" containerID="ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444" Jan 29 11:24:03.848576 
containerd[1603]: time="2025-01-29T11:24:03.848542124Z" level=info msg="RemoveContainer for \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\""
Jan 29 11:24:03.856001 containerd[1603]: time="2025-01-29T11:24:03.855885475Z" level=info msg="RemoveContainer for \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\" returns successfully"
Jan 29 11:24:03.856221 kubelet[2833]: I0129 11:24:03.856195 2833 scope.go:117] "RemoveContainer" containerID="b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a"
Jan 29 11:24:03.856513 containerd[1603]: time="2025-01-29T11:24:03.856462598Z" level=error msg="ContainerStatus for \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\": not found"
Jan 29 11:24:03.856747 kubelet[2833]: E0129 11:24:03.856711 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\": not found" containerID="b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a"
Jan 29 11:24:03.856779 kubelet[2833]: I0129 11:24:03.856742 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a"} err="failed to get container status \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1d81652361ab111a5df43366240c17f9e8cc92d410959c847982001a4331a6a\": not found"
Jan 29 11:24:03.856779 kubelet[2833]: I0129 11:24:03.856769 2833 scope.go:117] "RemoveContainer" containerID="f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5"
Jan 29 11:24:03.857757 containerd[1603]: time="2025-01-29T11:24:03.857726911Z" level=error msg="ContainerStatus for \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\": not found"
Jan 29 11:24:03.857881 kubelet[2833]: E0129 11:24:03.857853 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\": not found" containerID="f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5"
Jan 29 11:24:03.858231 kubelet[2833]: I0129 11:24:03.858016 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5"} err="failed to get container status \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8724d004f6fbb0951c569637321a20e6bb7994fce0781f1b6b72c429bc6ada5\": not found"
Jan 29 11:24:03.858231 kubelet[2833]: I0129 11:24:03.858041 2833 scope.go:117] "RemoveContainer" containerID="ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444"
Jan 29 11:24:03.858369 containerd[1603]: time="2025-01-29T11:24:03.858177927Z" level=error msg="ContainerStatus for \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\": not found"
Jan 29 11:24:03.858784 kubelet[2833]: E0129 11:24:03.858687 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\": not found" containerID="ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444"
Jan 29 11:24:03.858784 kubelet[2833]: I0129 11:24:03.858714 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444"} err="failed to get container status \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab41e2e04de83703ea660fa843e699d8b151d1980ea1590de355ff1a0ac58444\": not found"
Jan 29 11:24:03.992778 kubelet[2833]: E0129 11:24:03.992744 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:03.993381 containerd[1603]: time="2025-01-29T11:24:03.993331097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lnzf,Uid:73bdfca7-f97e-40cb-a10e-3c4e36825c7a,Namespace:calico-system,Attempt:0,}"
Jan 29 11:24:04.037770 systemd[1]: var-lib-kubelet-pods-530f2b50\x2dc66a\x2d4ebc\x2d869f\x2deeb1d00efe6c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Jan 29 11:24:04.038280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74-rootfs.mount: Deactivated successfully.
Jan 29 11:24:04.038433 systemd[1]: run-netns-cni\x2de7f1f84b\x2d93c5\x2d26cc\x2d7b8b\x2d7361ce1363b1.mount: Deactivated successfully.
Jan 29 11:24:04.038573 systemd[1]: var-lib-kubelet-pods-a604ab62\x2d8067\x2d431a\x2d883d\x2d8827e924e33c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Jan 29 11:24:04.038728 systemd[1]: var-lib-kubelet-pods-530f2b50\x2dc66a\x2d4ebc\x2d869f\x2deeb1d00efe6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqqdz.mount: Deactivated successfully.
Jan 29 11:24:04.038894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece-rootfs.mount: Deactivated successfully.
Jan 29 11:24:04.039029 systemd[1]: var-lib-kubelet-pods-a604ab62\x2d8067\x2d431a\x2d883d\x2d8827e924e33c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhz4v.mount: Deactivated successfully.
Jan 29 11:24:04.039168 systemd[1]: var-lib-kubelet-pods-a604ab62\x2d8067\x2d431a\x2d883d\x2d8827e924e33c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Jan 29 11:24:04.214683 containerd[1603]: time="2025-01-29T11:24:04.214152483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:24:04.215678 containerd[1603]: time="2025-01-29T11:24:04.214215721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:24:04.215678 containerd[1603]: time="2025-01-29T11:24:04.214229136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:04.215678 containerd[1603]: time="2025-01-29T11:24:04.214317863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:24:04.283511 kubelet[2833]: I0129 11:24:04.282857 2833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530f2b50-c66a-4ebc-869f-eeb1d00efe6c" path="/var/lib/kubelet/pods/530f2b50-c66a-4ebc-869f-eeb1d00efe6c/volumes"
Jan 29 11:24:04.283511 kubelet[2833]: I0129 11:24:04.283492 2833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a604ab62-8067-431a-883d-8827e924e33c" path="/var/lib/kubelet/pods/a604ab62-8067-431a-883d-8827e924e33c/volumes"
Jan 29 11:24:04.335758 containerd[1603]: time="2025-01-29T11:24:04.335611153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lnzf,Uid:73bdfca7-f97e-40cb-a10e-3c4e36825c7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\""
Jan 29 11:24:04.340882 kubelet[2833]: E0129 11:24:04.337224 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:04.343325 containerd[1603]: time="2025-01-29T11:24:04.343285845Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:24:04.427864 containerd[1603]: time="2025-01-29T11:24:04.427780288Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63b585d372ad27dd72cde1422207ac63beb3700532a30228e3a14e0833004050\""
Jan 29 11:24:04.428759 containerd[1603]: time="2025-01-29T11:24:04.428727936Z" level=info msg="StartContainer for \"63b585d372ad27dd72cde1422207ac63beb3700532a30228e3a14e0833004050\""
Jan 29 11:24:04.518255 containerd[1603]: time="2025-01-29T11:24:04.518118577Z" level=info msg="StartContainer for \"63b585d372ad27dd72cde1422207ac63beb3700532a30228e3a14e0833004050\" returns successfully"
Jan 29 11:24:04.635001 containerd[1603]: time="2025-01-29T11:24:04.634907944Z" level=info msg="shim disconnected" id=63b585d372ad27dd72cde1422207ac63beb3700532a30228e3a14e0833004050 namespace=k8s.io
Jan 29 11:24:04.635001 containerd[1603]: time="2025-01-29T11:24:04.634960763Z" level=warning msg="cleaning up after shim disconnected" id=63b585d372ad27dd72cde1422207ac63beb3700532a30228e3a14e0833004050 namespace=k8s.io
Jan 29 11:24:04.635001 containerd[1603]: time="2025-01-29T11:24:04.634969991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:24:04.777533 kubelet[2833]: E0129 11:24:04.777179 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:04.780583 containerd[1603]: time="2025-01-29T11:24:04.780417318Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:24:04.799441 containerd[1603]: time="2025-01-29T11:24:04.799385594Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1\""
Jan 29 11:24:04.800215 containerd[1603]: time="2025-01-29T11:24:04.800190425Z" level=info msg="StartContainer for \"25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1\""
Jan 29 11:24:05.007153 containerd[1603]: time="2025-01-29T11:24:05.006930473Z" level=info msg="StartContainer for \"25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1\" returns successfully"
Jan 29 11:24:05.365170 sshd[6434]: Connection closed by 10.0.0.1 port 45010
Jan 29 11:24:05.365724 sshd-session[6413]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:05.375004 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:57904.service - OpenSSH per-connection server daemon (10.0.0.1:57904).
Jan 29 11:24:05.375519 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:45010.service: Deactivated successfully.
Jan 29 11:24:05.390035 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:24:05.397709 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:24:05.402128 systemd-logind[1584]: Removed session 17.
Jan 29 11:24:05.440187 sshd[6594]: Accepted publickey for core from 10.0.0.1 port 57904 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:05.442297 sshd-session[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:05.447843 systemd-logind[1584]: New session 18 of user core.
Jan 29 11:24:05.460065 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:24:05.611283 containerd[1603]: time="2025-01-29T11:24:05.611232702Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config"
Jan 29 11:24:05.636443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1-rootfs.mount: Deactivated successfully.
Jan 29 11:24:05.788456 kubelet[2833]: E0129 11:24:05.788415 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:06.094613 containerd[1603]: time="2025-01-29T11:24:06.094548981Z" level=info msg="shim disconnected" id=25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1 namespace=k8s.io
Jan 29 11:24:06.094613 containerd[1603]: time="2025-01-29T11:24:06.094603644Z" level=warning msg="cleaning up after shim disconnected" id=25ee22b54c5ad4ba2fe7acc75a802c873482f9aa903043bbd0151b286fa883b1 namespace=k8s.io
Jan 29 11:24:06.094613 containerd[1603]: time="2025-01-29T11:24:06.094614133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:24:06.174401 sshd[6600]: Connection closed by 10.0.0.1 port 57904
Jan 29 11:24:06.176094 sshd-session[6594]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:06.182238 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:57908.service - OpenSSH per-connection server daemon (10.0.0.1:57908).
Jan 29 11:24:06.183016 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:57904.service: Deactivated successfully.
Jan 29 11:24:06.186716 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:24:06.188004 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:24:06.189742 systemd-logind[1584]: Removed session 18.
Jan 29 11:24:06.227989 sshd[6633]: Accepted publickey for core from 10.0.0.1 port 57908 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:06.229753 sshd-session[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:06.234214 systemd-logind[1584]: New session 19 of user core.
Jan 29 11:24:06.239941 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:24:06.269771 containerd[1603]: time="2025-01-29T11:24:06.269732214Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\""
Jan 29 11:24:06.269870 containerd[1603]: time="2025-01-29T11:24:06.269850866Z" level=info msg="TearDown network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" successfully"
Jan 29 11:24:06.269908 containerd[1603]: time="2025-01-29T11:24:06.269867748Z" level=info msg="StopPodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" returns successfully"
Jan 29 11:24:06.270211 containerd[1603]: time="2025-01-29T11:24:06.270183330Z" level=info msg="RemovePodSandbox for \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\""
Jan 29 11:24:06.274133 containerd[1603]: time="2025-01-29T11:24:06.274093939Z" level=info msg="Forcibly stopping sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\""
Jan 29 11:24:06.274299 containerd[1603]: time="2025-01-29T11:24:06.274215677Z" level=info msg="TearDown network for sandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" successfully"
Jan 29 11:24:06.278083 containerd[1603]: time="2025-01-29T11:24:06.278047868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.278150 containerd[1603]: time="2025-01-29T11:24:06.278097280Z" level=info msg="RemovePodSandbox \"150f2affaeb923f17fd0e14cdeec193b51a08ecc1e8c107d8ff5259713f17344\" returns successfully"
Jan 29 11:24:06.278391 containerd[1603]: time="2025-01-29T11:24:06.278358500Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\""
Jan 29 11:24:06.278531 containerd[1603]: time="2025-01-29T11:24:06.278440455Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully"
Jan 29 11:24:06.278561 containerd[1603]: time="2025-01-29T11:24:06.278529712Z" level=info msg="StopPodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully"
Jan 29 11:24:06.278895 containerd[1603]: time="2025-01-29T11:24:06.278851626Z" level=info msg="RemovePodSandbox for \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\""
Jan 29 11:24:06.278895 containerd[1603]: time="2025-01-29T11:24:06.278894276Z" level=info msg="Forcibly stopping sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\""
Jan 29 11:24:06.279020 containerd[1603]: time="2025-01-29T11:24:06.278979816Z" level=info msg="TearDown network for sandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" successfully"
Jan 29 11:24:06.282724 containerd[1603]: time="2025-01-29T11:24:06.282676523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.282724 containerd[1603]: time="2025-01-29T11:24:06.282711999Z" level=info msg="RemovePodSandbox \"a135f76f0e160a59e08d2dcd3dfed3816c2b0eb6a23302cc327681a98221207a\" returns successfully"
Jan 29 11:24:06.283019 containerd[1603]: time="2025-01-29T11:24:06.282987627Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\""
Jan 29 11:24:06.283143 containerd[1603]: time="2025-01-29T11:24:06.283086322Z" level=info msg="TearDown network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" successfully"
Jan 29 11:24:06.283143 containerd[1603]: time="2025-01-29T11:24:06.283133591Z" level=info msg="StopPodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" returns successfully"
Jan 29 11:24:06.283381 containerd[1603]: time="2025-01-29T11:24:06.283358803Z" level=info msg="RemovePodSandbox for \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\""
Jan 29 11:24:06.283420 containerd[1603]: time="2025-01-29T11:24:06.283384782Z" level=info msg="Forcibly stopping sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\""
Jan 29 11:24:06.283556 containerd[1603]: time="2025-01-29T11:24:06.283453090Z" level=info msg="TearDown network for sandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" successfully"
Jan 29 11:24:06.288272 containerd[1603]: time="2025-01-29T11:24:06.287594382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.288272 containerd[1603]: time="2025-01-29T11:24:06.287630099Z" level=info msg="RemovePodSandbox \"fa6edcfd0df12fae250fd234b98ab24186152525435db48f0eb2ab7f3c62e52c\" returns successfully"
Jan 29 11:24:06.288904 containerd[1603]: time="2025-01-29T11:24:06.288877770Z" level=info msg="StopPodSandbox for \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\""
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.331 [WARNING][6663] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.332 [INFO][6663] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.332 [INFO][6663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" iface="eth0" netns=""
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.332 [INFO][6663] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.332 [INFO][6663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.356 [INFO][6672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.357 [INFO][6672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.357 [INFO][6672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.361 [WARNING][6672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.361 [INFO][6672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.363 [INFO][6672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:24:06.368211 containerd[1603]: 2025-01-29 11:24:06.365 [INFO][6663] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.368211 containerd[1603]: time="2025-01-29T11:24:06.368191909Z" level=info msg="TearDown network for sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" successfully"
Jan 29 11:24:06.369211 containerd[1603]: time="2025-01-29T11:24:06.368223217Z" level=info msg="StopPodSandbox for \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" returns successfully"
Jan 29 11:24:06.369211 containerd[1603]: time="2025-01-29T11:24:06.368664105Z" level=info msg="RemovePodSandbox for \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\""
Jan 29 11:24:06.369211 containerd[1603]: time="2025-01-29T11:24:06.368685766Z" level=info msg="Forcibly stopping sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\""
Jan 29 11:24:06.369997 sshd[6639]: Connection closed by 10.0.0.1 port 57908
Jan 29 11:24:06.371012 sshd-session[6633]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:06.374625 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:57908.service: Deactivated successfully.
Jan 29 11:24:06.375290 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:24:06.381276 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:24:06.382388 systemd-logind[1584]: Removed session 19.
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.406 [WARNING][6697] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.406 [INFO][6697] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.406 [INFO][6697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" iface="eth0" netns=""
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.406 [INFO][6697] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.407 [INFO][6697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.424 [INFO][6706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.424 [INFO][6706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.424 [INFO][6706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.430 [WARNING][6706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.430 [INFO][6706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" HandleID="k8s-pod-network.a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74" Workload="localhost-k8s-calico--kube--controllers--695974dcd7--g2c9b-eth0"
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.431 [INFO][6706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:24:06.436373 containerd[1603]: 2025-01-29 11:24:06.434 [INFO][6697] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74"
Jan 29 11:24:06.436758 containerd[1603]: time="2025-01-29T11:24:06.436401351Z" level=info msg="TearDown network for sandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" successfully"
Jan 29 11:24:06.444733 containerd[1603]: time="2025-01-29T11:24:06.444690024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.444787 containerd[1603]: time="2025-01-29T11:24:06.444753322Z" level=info msg="RemovePodSandbox \"a4a00139d5554d51c9351bf223173fbace104160d2aa2a280876c6e435ef6e74\" returns successfully"
Jan 29 11:24:06.445265 containerd[1603]: time="2025-01-29T11:24:06.445247039Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\""
Jan 29 11:24:06.445359 containerd[1603]: time="2025-01-29T11:24:06.445345053Z" level=info msg="TearDown network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully"
Jan 29 11:24:06.445393 containerd[1603]: time="2025-01-29T11:24:06.445358058Z" level=info msg="StopPodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully"
Jan 29 11:24:06.445718 containerd[1603]: time="2025-01-29T11:24:06.445697234Z" level=info msg="RemovePodSandbox for \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\""
Jan 29 11:24:06.445757 containerd[1603]: time="2025-01-29T11:24:06.445720477Z" level=info msg="Forcibly stopping sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\""
Jan 29 11:24:06.445827 containerd[1603]: time="2025-01-29T11:24:06.445786311Z" level=info msg="TearDown network for sandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" successfully"
Jan 29 11:24:06.449323 containerd[1603]: time="2025-01-29T11:24:06.449288012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.449323 containerd[1603]: time="2025-01-29T11:24:06.449320422Z" level=info msg="RemovePodSandbox \"b6485bd72b9690bf1d7c88615ec23ae9b8503b8b63d9a2013580389f1de583a3\" returns successfully"
Jan 29 11:24:06.449582 containerd[1603]: time="2025-01-29T11:24:06.449547057Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\""
Jan 29 11:24:06.449693 containerd[1603]: time="2025-01-29T11:24:06.449671101Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully"
Jan 29 11:24:06.449693 containerd[1603]: time="2025-01-29T11:24:06.449685177Z" level=info msg="StopPodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully"
Jan 29 11:24:06.450087 containerd[1603]: time="2025-01-29T11:24:06.450001431Z" level=info msg="RemovePodSandbox for \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\""
Jan 29 11:24:06.450087 containerd[1603]: time="2025-01-29T11:24:06.450024093Z" level=info msg="Forcibly stopping sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\""
Jan 29 11:24:06.450154 containerd[1603]: time="2025-01-29T11:24:06.450097741Z" level=info msg="TearDown network for sandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" successfully"
Jan 29 11:24:06.453530 containerd[1603]: time="2025-01-29T11:24:06.453475850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.453671 containerd[1603]: time="2025-01-29T11:24:06.453551562Z" level=info msg="RemovePodSandbox \"338bff58e3718e97f4990cdee8de8dde4f659d7274300bcc56d8e4a2495653cb\" returns successfully"
Jan 29 11:24:06.453891 containerd[1603]: time="2025-01-29T11:24:06.453858939Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\""
Jan 29 11:24:06.453954 containerd[1603]: time="2025-01-29T11:24:06.453936564Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully"
Jan 29 11:24:06.453954 containerd[1603]: time="2025-01-29T11:24:06.453950480Z" level=info msg="StopPodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully"
Jan 29 11:24:06.454665 containerd[1603]: time="2025-01-29T11:24:06.454305828Z" level=info msg="RemovePodSandbox for \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\""
Jan 29 11:24:06.454665 containerd[1603]: time="2025-01-29T11:24:06.454328490Z" level=info msg="Forcibly stopping sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\""
Jan 29 11:24:06.454665 containerd[1603]: time="2025-01-29T11:24:06.454398472Z" level=info msg="TearDown network for sandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" successfully"
Jan 29 11:24:06.457621 containerd[1603]: time="2025-01-29T11:24:06.457588347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.457621 containerd[1603]: time="2025-01-29T11:24:06.457619625Z" level=info msg="RemovePodSandbox \"783cd67993ce2142beb9efbc9836643619a39d642ceb3bee1a6f316cfeac41b6\" returns successfully"
Jan 29 11:24:06.457894 containerd[1603]: time="2025-01-29T11:24:06.457875306Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:24:06.457972 containerd[1603]: time="2025-01-29T11:24:06.457950878Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully"
Jan 29 11:24:06.457972 containerd[1603]: time="2025-01-29T11:24:06.457964663Z" level=info msg="StopPodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully"
Jan 29 11:24:06.458297 containerd[1603]: time="2025-01-29T11:24:06.458272180Z" level=info msg="RemovePodSandbox for \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:24:06.458352 containerd[1603]: time="2025-01-29T11:24:06.458302226Z" level=info msg="Forcibly stopping sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\""
Jan 29 11:24:06.458431 containerd[1603]: time="2025-01-29T11:24:06.458391955Z" level=info msg="TearDown network for sandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" successfully"
Jan 29 11:24:06.463286 containerd[1603]: time="2025-01-29T11:24:06.463254739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.463323 containerd[1603]: time="2025-01-29T11:24:06.463292832Z" level=info msg="RemovePodSandbox \"5b8285c4167fae43bdf7e7fe0ed4d133c9f02be2bff91f8af9e77fc41e540f40\" returns successfully"
Jan 29 11:24:06.463524 containerd[1603]: time="2025-01-29T11:24:06.463497265Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\""
Jan 29 11:24:06.463589 containerd[1603]: time="2025-01-29T11:24:06.463572226Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully"
Jan 29 11:24:06.463589 containerd[1603]: time="2025-01-29T11:24:06.463585040Z" level=info msg="StopPodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully"
Jan 29 11:24:06.463834 containerd[1603]: time="2025-01-29T11:24:06.463812426Z" level=info msg="RemovePodSandbox for \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\""
Jan 29 11:24:06.463834 containerd[1603]: time="2025-01-29T11:24:06.463832724Z" level=info msg="Forcibly stopping sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\""
Jan 29 11:24:06.463925 containerd[1603]: time="2025-01-29T11:24:06.463896033Z" level=info msg="TearDown network for sandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" successfully"
Jan 29 11:24:06.467142 containerd[1603]: time="2025-01-29T11:24:06.467112589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.467205 containerd[1603]: time="2025-01-29T11:24:06.467151552Z" level=info msg="RemovePodSandbox \"2b12aaee3fb24ea4697eaeaacf2b0f77d35d8648f48666d438dc4cc5bb7ec59b\" returns successfully"
Jan 29 11:24:06.467440 containerd[1603]: time="2025-01-29T11:24:06.467415227Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\""
Jan 29 11:24:06.467522 containerd[1603]: time="2025-01-29T11:24:06.467493223Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully"
Jan 29 11:24:06.467522 containerd[1603]: time="2025-01-29T11:24:06.467507900Z" level=info msg="StopPodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully"
Jan 29 11:24:06.467766 containerd[1603]: time="2025-01-29T11:24:06.467737391Z" level=info msg="RemovePodSandbox for \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\""
Jan 29 11:24:06.467766 containerd[1603]: time="2025-01-29T11:24:06.467762658Z" level=info msg="Forcibly stopping sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\""
Jan 29 11:24:06.467932 containerd[1603]: time="2025-01-29T11:24:06.467834444Z" level=info msg="TearDown network for sandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" successfully"
Jan 29 11:24:06.472180 containerd[1603]: time="2025-01-29T11:24:06.472146494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.472243 containerd[1603]: time="2025-01-29T11:24:06.472184856Z" level=info msg="RemovePodSandbox \"ce8f26a523996a501e944d30e47b8575cb38e2cd7c509e4f70386806f3670256\" returns successfully"
Jan 29 11:24:06.472489 containerd[1603]: time="2025-01-29T11:24:06.472460504Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\""
Jan 29 11:24:06.472600 containerd[1603]: time="2025-01-29T11:24:06.472576462Z" level=info msg="TearDown network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" successfully"
Jan 29 11:24:06.472600 containerd[1603]: time="2025-01-29T11:24:06.472598092Z" level=info msg="StopPodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" returns successfully"
Jan 29 11:24:06.472868 containerd[1603]: time="2025-01-29T11:24:06.472834546Z" level=info msg="RemovePodSandbox for \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\""
Jan 29 11:24:06.472868 containerd[1603]: time="2025-01-29T11:24:06.472863370Z" level=info msg="Forcibly stopping sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\""
Jan 29 11:24:06.472976 containerd[1603]: time="2025-01-29T11:24:06.472941647Z" level=info msg="TearDown network for sandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" successfully"
Jan 29 11:24:06.476974 containerd[1603]: time="2025-01-29T11:24:06.476942595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:24:06.476974 containerd[1603]: time="2025-01-29T11:24:06.476982490Z" level=info msg="RemovePodSandbox \"ce68d087c4e47b28f1c1502ce7e569438109b05efbb09bab5d41c09672a66b4a\" returns successfully" Jan 29 11:24:06.477394 containerd[1603]: time="2025-01-29T11:24:06.477244822Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" Jan 29 11:24:06.477394 containerd[1603]: time="2025-01-29T11:24:06.477325683Z" level=info msg="TearDown network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" successfully" Jan 29 11:24:06.477394 containerd[1603]: time="2025-01-29T11:24:06.477337646Z" level=info msg="StopPodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" returns successfully" Jan 29 11:24:06.477576 containerd[1603]: time="2025-01-29T11:24:06.477540426Z" level=info msg="RemovePodSandbox for \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" Jan 29 11:24:06.477576 containerd[1603]: time="2025-01-29T11:24:06.477560313Z" level=info msg="Forcibly stopping sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\"" Jan 29 11:24:06.477666 containerd[1603]: time="2025-01-29T11:24:06.477625405Z" level=info msg="TearDown network for sandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" successfully" Jan 29 11:24:06.480864 containerd[1603]: time="2025-01-29T11:24:06.480838024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.480900 containerd[1603]: time="2025-01-29T11:24:06.480882357Z" level=info msg="RemovePodSandbox \"eaba45c931542a37b392ee87f31de0ac03fe2cf52de90a10dde892a615e25c14\" returns successfully" Jan 29 11:24:06.481153 containerd[1603]: time="2025-01-29T11:24:06.481133007Z" level=info msg="StopPodSandbox for \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\"" Jan 29 11:24:06.481249 containerd[1603]: time="2025-01-29T11:24:06.481234508Z" level=info msg="TearDown network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" successfully" Jan 29 11:24:06.481277 containerd[1603]: time="2025-01-29T11:24:06.481246891Z" level=info msg="StopPodSandbox for \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" returns successfully" Jan 29 11:24:06.481533 containerd[1603]: time="2025-01-29T11:24:06.481498794Z" level=info msg="RemovePodSandbox for \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\"" Jan 29 11:24:06.481533 containerd[1603]: time="2025-01-29T11:24:06.481519794Z" level=info msg="Forcibly stopping sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\"" Jan 29 11:24:06.481616 containerd[1603]: time="2025-01-29T11:24:06.481579946Z" level=info msg="TearDown network for sandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" successfully" Jan 29 11:24:06.485533 containerd[1603]: time="2025-01-29T11:24:06.485499261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.485533 containerd[1603]: time="2025-01-29T11:24:06.485533825Z" level=info msg="RemovePodSandbox \"4b410ff731ba977c67b2166e22cd8bf68b4abd04cf69c804d2cfabe644e7d537\" returns successfully" Jan 29 11:24:06.485811 containerd[1603]: time="2025-01-29T11:24:06.485786068Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:24:06.485886 containerd[1603]: time="2025-01-29T11:24:06.485869986Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:24:06.485886 containerd[1603]: time="2025-01-29T11:24:06.485882720Z" level=info msg="StopPodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:24:06.486133 containerd[1603]: time="2025-01-29T11:24:06.486108043Z" level=info msg="RemovePodSandbox for \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:24:06.486133 containerd[1603]: time="2025-01-29T11:24:06.486131557Z" level=info msg="Forcibly stopping sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\"" Jan 29 11:24:06.486239 containerd[1603]: time="2025-01-29T11:24:06.486207410Z" level=info msg="TearDown network for sandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" successfully" Jan 29 11:24:06.489504 containerd[1603]: time="2025-01-29T11:24:06.489476984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.489569 containerd[1603]: time="2025-01-29T11:24:06.489514565Z" level=info msg="RemovePodSandbox \"1b729f0d5d6baf5aee7d2a2d089bd3955cae0639c7f1b3a996ea88be9c8090fc\" returns successfully" Jan 29 11:24:06.489778 containerd[1603]: time="2025-01-29T11:24:06.489761218Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:24:06.490013 containerd[1603]: time="2025-01-29T11:24:06.489833634Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:24:06.490013 containerd[1603]: time="2025-01-29T11:24:06.489847330Z" level=info msg="StopPodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:24:06.490115 containerd[1603]: time="2025-01-29T11:24:06.490086448Z" level=info msg="RemovePodSandbox for \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:24:06.490160 containerd[1603]: time="2025-01-29T11:24:06.490116274Z" level=info msg="Forcibly stopping sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\"" Jan 29 11:24:06.490242 containerd[1603]: time="2025-01-29T11:24:06.490195222Z" level=info msg="TearDown network for sandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" successfully" Jan 29 11:24:06.494332 containerd[1603]: time="2025-01-29T11:24:06.494297180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.494332 containerd[1603]: time="2025-01-29T11:24:06.494340762Z" level=info msg="RemovePodSandbox \"f020e221c67b2babca5c6ce0c3afb52e685e9bb6e9d0fe1cf24e4f740b0a04fa\" returns successfully" Jan 29 11:24:06.494677 containerd[1603]: time="2025-01-29T11:24:06.494629283Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:24:06.494830 containerd[1603]: time="2025-01-29T11:24:06.494797839Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:24:06.494830 containerd[1603]: time="2025-01-29T11:24:06.494813528Z" level=info msg="StopPodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:24:06.495061 containerd[1603]: time="2025-01-29T11:24:06.495040434Z" level=info msg="RemovePodSandbox for \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:24:06.495093 containerd[1603]: time="2025-01-29T11:24:06.495062014Z" level=info msg="Forcibly stopping sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\"" Jan 29 11:24:06.495156 containerd[1603]: time="2025-01-29T11:24:06.495124962Z" level=info msg="TearDown network for sandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" successfully" Jan 29 11:24:06.498665 containerd[1603]: time="2025-01-29T11:24:06.498606415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.498788 containerd[1603]: time="2025-01-29T11:24:06.498677599Z" level=info msg="RemovePodSandbox \"8d9caa31e6c4b9dfbd654951da439d0bdca7eb5b35cdd3ad949108b601e68175\" returns successfully" Jan 29 11:24:06.499319 containerd[1603]: time="2025-01-29T11:24:06.499289567Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:24:06.499411 containerd[1603]: time="2025-01-29T11:24:06.499392290Z" level=info msg="TearDown network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" successfully" Jan 29 11:24:06.499449 containerd[1603]: time="2025-01-29T11:24:06.499412708Z" level=info msg="StopPodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" returns successfully" Jan 29 11:24:06.499730 containerd[1603]: time="2025-01-29T11:24:06.499709476Z" level=info msg="RemovePodSandbox for \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:24:06.499788 containerd[1603]: time="2025-01-29T11:24:06.499736096Z" level=info msg="Forcibly stopping sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\"" Jan 29 11:24:06.499838 containerd[1603]: time="2025-01-29T11:24:06.499805626Z" level=info msg="TearDown network for sandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" successfully" Jan 29 11:24:06.503118 containerd[1603]: time="2025-01-29T11:24:06.503093655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.503241 containerd[1603]: time="2025-01-29T11:24:06.503127729Z" level=info msg="RemovePodSandbox \"553b4cf155dd4bec9a65c7b8432384bbc540be9b848ddfb6f966a36f7cf39ad2\" returns successfully" Jan 29 11:24:06.503418 containerd[1603]: time="2025-01-29T11:24:06.503392146Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" Jan 29 11:24:06.503484 containerd[1603]: time="2025-01-29T11:24:06.503468770Z" level=info msg="TearDown network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" successfully" Jan 29 11:24:06.503484 containerd[1603]: time="2025-01-29T11:24:06.503481514Z" level=info msg="StopPodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" returns successfully" Jan 29 11:24:06.503756 containerd[1603]: time="2025-01-29T11:24:06.503729950Z" level=info msg="RemovePodSandbox for \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" Jan 29 11:24:06.503756 containerd[1603]: time="2025-01-29T11:24:06.503748144Z" level=info msg="Forcibly stopping sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\"" Jan 29 11:24:06.504116 containerd[1603]: time="2025-01-29T11:24:06.503830789Z" level=info msg="TearDown network for sandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" successfully" Jan 29 11:24:06.507066 containerd[1603]: time="2025-01-29T11:24:06.507035823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.507066 containerd[1603]: time="2025-01-29T11:24:06.507066661Z" level=info msg="RemovePodSandbox \"08b470268cbf5fa8acee0eaaf8b8e29b6b2be1b527ee3bdd7b5e77a47e8b6bda\" returns successfully" Jan 29 11:24:06.507457 containerd[1603]: time="2025-01-29T11:24:06.507315378Z" level=info msg="StopPodSandbox for \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\"" Jan 29 11:24:06.507457 containerd[1603]: time="2025-01-29T11:24:06.507399345Z" level=info msg="TearDown network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" successfully" Jan 29 11:24:06.507457 containerd[1603]: time="2025-01-29T11:24:06.507410696Z" level=info msg="StopPodSandbox for \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" returns successfully" Jan 29 11:24:06.507677 containerd[1603]: time="2025-01-29T11:24:06.507625479Z" level=info msg="RemovePodSandbox for \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\"" Jan 29 11:24:06.507746 containerd[1603]: time="2025-01-29T11:24:06.507681384Z" level=info msg="Forcibly stopping sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\"" Jan 29 11:24:06.507807 containerd[1603]: time="2025-01-29T11:24:06.507764881Z" level=info msg="TearDown network for sandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" successfully" Jan 29 11:24:06.511531 containerd[1603]: time="2025-01-29T11:24:06.511499279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.511592 containerd[1603]: time="2025-01-29T11:24:06.511547750Z" level=info msg="RemovePodSandbox \"0ad52c0378c3992ace5f263f5f7e8d89607233de4fdde49c2ad878a4e14fb393\" returns successfully" Jan 29 11:24:06.511815 containerd[1603]: time="2025-01-29T11:24:06.511789774Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:24:06.511890 containerd[1603]: time="2025-01-29T11:24:06.511867399Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:24:06.511890 containerd[1603]: time="2025-01-29T11:24:06.511878059Z" level=info msg="StopPodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:24:06.512107 containerd[1603]: time="2025-01-29T11:24:06.512083144Z" level=info msg="RemovePodSandbox for \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:24:06.512188 containerd[1603]: time="2025-01-29T11:24:06.512167562Z" level=info msg="Forcibly stopping sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\"" Jan 29 11:24:06.512280 containerd[1603]: time="2025-01-29T11:24:06.512247793Z" level=info msg="TearDown network for sandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" successfully" Jan 29 11:24:06.515823 containerd[1603]: time="2025-01-29T11:24:06.515792424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.515880 containerd[1603]: time="2025-01-29T11:24:06.515835585Z" level=info msg="RemovePodSandbox \"0ae1be8ae91ee9a84aff0afb50b2d3df6354ff618f53e3ab0a66906b46c38b43\" returns successfully" Jan 29 11:24:06.516133 containerd[1603]: time="2025-01-29T11:24:06.516112325Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:24:06.516215 containerd[1603]: time="2025-01-29T11:24:06.516189720Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:24:06.516247 containerd[1603]: time="2025-01-29T11:24:06.516211661Z" level=info msg="StopPodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:24:06.516457 containerd[1603]: time="2025-01-29T11:24:06.516440601Z" level=info msg="RemovePodSandbox for \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:24:06.516514 containerd[1603]: time="2025-01-29T11:24:06.516458404Z" level=info msg="Forcibly stopping sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\"" Jan 29 11:24:06.516539 containerd[1603]: time="2025-01-29T11:24:06.516518887Z" level=info msg="TearDown network for sandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" successfully" Jan 29 11:24:06.519616 containerd[1603]: time="2025-01-29T11:24:06.519582796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.519616 containerd[1603]: time="2025-01-29T11:24:06.519614796Z" level=info msg="RemovePodSandbox \"73f0cb802a9fbcbb312d7c808f510fff7be4b1a32d0e7ec64ecb00f0652aa293\" returns successfully" Jan 29 11:24:06.519871 containerd[1603]: time="2025-01-29T11:24:06.519846111Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:24:06.519935 containerd[1603]: time="2025-01-29T11:24:06.519919358Z" level=info msg="TearDown network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" successfully" Jan 29 11:24:06.519935 containerd[1603]: time="2025-01-29T11:24:06.519931872Z" level=info msg="StopPodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" returns successfully" Jan 29 11:24:06.520147 containerd[1603]: time="2025-01-29T11:24:06.520120726Z" level=info msg="RemovePodSandbox for \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:24:06.520186 containerd[1603]: time="2025-01-29T11:24:06.520148839Z" level=info msg="Forcibly stopping sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\"" Jan 29 11:24:06.520246 containerd[1603]: time="2025-01-29T11:24:06.520218509Z" level=info msg="TearDown network for sandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" successfully" Jan 29 11:24:06.523286 containerd[1603]: time="2025-01-29T11:24:06.523257191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.523333 containerd[1603]: time="2025-01-29T11:24:06.523289993Z" level=info msg="RemovePodSandbox \"2032318863409a61c21272218b8cd4646ce4719e34502883e5320a494b83f759\" returns successfully" Jan 29 11:24:06.523550 containerd[1603]: time="2025-01-29T11:24:06.523527679Z" level=info msg="StopPodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" Jan 29 11:24:06.523640 containerd[1603]: time="2025-01-29T11:24:06.523613039Z" level=info msg="TearDown network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" successfully" Jan 29 11:24:06.523640 containerd[1603]: time="2025-01-29T11:24:06.523629641Z" level=info msg="StopPodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" returns successfully" Jan 29 11:24:06.523877 containerd[1603]: time="2025-01-29T11:24:06.523847008Z" level=info msg="RemovePodSandbox for \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" Jan 29 11:24:06.523877 containerd[1603]: time="2025-01-29T11:24:06.523870492Z" level=info msg="Forcibly stopping sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\"" Jan 29 11:24:06.523986 containerd[1603]: time="2025-01-29T11:24:06.523948418Z" level=info msg="TearDown network for sandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" successfully" Jan 29 11:24:06.527434 containerd[1603]: time="2025-01-29T11:24:06.527402991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.527477 containerd[1603]: time="2025-01-29T11:24:06.527436774Z" level=info msg="RemovePodSandbox \"f23cdcd05d9e9caddc21e6a58029918cb6c0af971f839fafb38dfab76cbbaa75\" returns successfully" Jan 29 11:24:06.527700 containerd[1603]: time="2025-01-29T11:24:06.527675392Z" level=info msg="StopPodSandbox for \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\"" Jan 29 11:24:06.527766 containerd[1603]: time="2025-01-29T11:24:06.527757716Z" level=info msg="TearDown network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" successfully" Jan 29 11:24:06.527789 containerd[1603]: time="2025-01-29T11:24:06.527767705Z" level=info msg="StopPodSandbox for \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" returns successfully" Jan 29 11:24:06.527967 containerd[1603]: time="2025-01-29T11:24:06.527943055Z" level=info msg="RemovePodSandbox for \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\"" Jan 29 11:24:06.527967 containerd[1603]: time="2025-01-29T11:24:06.527967451Z" level=info msg="Forcibly stopping sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\"" Jan 29 11:24:06.528072 containerd[1603]: time="2025-01-29T11:24:06.528032062Z" level=info msg="TearDown network for sandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" successfully" Jan 29 11:24:06.531464 containerd[1603]: time="2025-01-29T11:24:06.531433103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.531511 containerd[1603]: time="2025-01-29T11:24:06.531473068Z" level=info msg="RemovePodSandbox \"2ba169e6d84bdaf3bedd683972515ae32d6e615e8e74f16e5bfab5a425907221\" returns successfully" Jan 29 11:24:06.531760 containerd[1603]: time="2025-01-29T11:24:06.531715343Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:24:06.531835 containerd[1603]: time="2025-01-29T11:24:06.531813928Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully" Jan 29 11:24:06.531875 containerd[1603]: time="2025-01-29T11:24:06.531834667Z" level=info msg="StopPodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully" Jan 29 11:24:06.532118 containerd[1603]: time="2025-01-29T11:24:06.532098782Z" level=info msg="RemovePodSandbox for \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:24:06.532155 containerd[1603]: time="2025-01-29T11:24:06.532120142Z" level=info msg="Forcibly stopping sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\"" Jan 29 11:24:06.532262 containerd[1603]: time="2025-01-29T11:24:06.532228425Z" level=info msg="TearDown network for sandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" successfully" Jan 29 11:24:06.535757 containerd[1603]: time="2025-01-29T11:24:06.535736268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.538660 containerd[1603]: time="2025-01-29T11:24:06.536188888Z" level=info msg="RemovePodSandbox \"35ec0a7ad28814a933a13455a4947e1aed386d558ea95f28ed9489de69a15ce7\" returns successfully" Jan 29 11:24:06.539059 containerd[1603]: time="2025-01-29T11:24:06.539030880Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:24:06.539147 containerd[1603]: time="2025-01-29T11:24:06.539127051Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:24:06.539176 containerd[1603]: time="2025-01-29T11:24:06.539145585Z" level=info msg="StopPodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 11:24:06.539674 containerd[1603]: time="2025-01-29T11:24:06.539640393Z" level=info msg="RemovePodSandbox for \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:24:06.539720 containerd[1603]: time="2025-01-29T11:24:06.539677202Z" level=info msg="Forcibly stopping sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\"" Jan 29 11:24:06.539763 containerd[1603]: time="2025-01-29T11:24:06.539748266Z" level=info msg="TearDown network for sandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" successfully" Jan 29 11:24:06.543697 containerd[1603]: time="2025-01-29T11:24:06.543610915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.543697 containerd[1603]: time="2025-01-29T11:24:06.543670206Z" level=info msg="RemovePodSandbox \"a5cd909401ea2ce8a9ad6c0d81b6e82a2574daf44c63988f441d2987250650d7\" returns successfully" Jan 29 11:24:06.543918 containerd[1603]: time="2025-01-29T11:24:06.543893925Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:24:06.544055 containerd[1603]: time="2025-01-29T11:24:06.543999163Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully" Jan 29 11:24:06.544113 containerd[1603]: time="2025-01-29T11:24:06.544063323Z" level=info msg="StopPodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully" Jan 29 11:24:06.544870 containerd[1603]: time="2025-01-29T11:24:06.544840622Z" level=info msg="RemovePodSandbox for \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:24:06.544870 containerd[1603]: time="2025-01-29T11:24:06.544868835Z" level=info msg="Forcibly stopping sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\"" Jan 29 11:24:06.545256 containerd[1603]: time="2025-01-29T11:24:06.544935730Z" level=info msg="TearDown network for sandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" successfully" Jan 29 11:24:06.548554 containerd[1603]: time="2025-01-29T11:24:06.548524334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.548554 containerd[1603]: time="2025-01-29T11:24:06.548556905Z" level=info msg="RemovePodSandbox \"d50a44cac0dffef490c70316d71bd161938d318913bb87ae97357f6734b82629\" returns successfully" Jan 29 11:24:06.548998 containerd[1603]: time="2025-01-29T11:24:06.548841278Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:24:06.548998 containerd[1603]: time="2025-01-29T11:24:06.548924636Z" level=info msg="TearDown network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" successfully" Jan 29 11:24:06.548998 containerd[1603]: time="2025-01-29T11:24:06.548952097Z" level=info msg="StopPodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" returns successfully" Jan 29 11:24:06.549195 containerd[1603]: time="2025-01-29T11:24:06.549161189Z" level=info msg="RemovePodSandbox for \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:24:06.549195 containerd[1603]: time="2025-01-29T11:24:06.549182810Z" level=info msg="Forcibly stopping sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\"" Jan 29 11:24:06.549284 containerd[1603]: time="2025-01-29T11:24:06.549256919Z" level=info msg="TearDown network for sandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" successfully" Jan 29 11:24:06.552741 containerd[1603]: time="2025-01-29T11:24:06.552715309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.552781 containerd[1603]: time="2025-01-29T11:24:06.552749503Z" level=info msg="RemovePodSandbox \"283981a716d656053a5d78ffa5bfc7b07b405a6ec2dbc6348922b5916b3747e8\" returns successfully" Jan 29 11:24:06.553041 containerd[1603]: time="2025-01-29T11:24:06.553016513Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" Jan 29 11:24:06.553111 containerd[1603]: time="2025-01-29T11:24:06.553094981Z" level=info msg="TearDown network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" successfully" Jan 29 11:24:06.553111 containerd[1603]: time="2025-01-29T11:24:06.553108265Z" level=info msg="StopPodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" returns successfully" Jan 29 11:24:06.553343 containerd[1603]: time="2025-01-29T11:24:06.553322507Z" level=info msg="RemovePodSandbox for \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" Jan 29 11:24:06.553384 containerd[1603]: time="2025-01-29T11:24:06.553346703Z" level=info msg="Forcibly stopping sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\"" Jan 29 11:24:06.553439 containerd[1603]: time="2025-01-29T11:24:06.553413548Z" level=info msg="TearDown network for sandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" successfully" Jan 29 11:24:06.556698 containerd[1603]: time="2025-01-29T11:24:06.556666462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.556790 containerd[1603]: time="2025-01-29T11:24:06.556708451Z" level=info msg="RemovePodSandbox \"8201816fca61c81413c8180d51f6d5ba8a8f673316e5c36234358ce2d476bd76\" returns successfully" Jan 29 11:24:06.556974 containerd[1603]: time="2025-01-29T11:24:06.556937380Z" level=info msg="StopPodSandbox for \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\"" Jan 29 11:24:06.557054 containerd[1603]: time="2025-01-29T11:24:06.557024784Z" level=info msg="TearDown network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" successfully" Jan 29 11:24:06.557054 containerd[1603]: time="2025-01-29T11:24:06.557052937Z" level=info msg="StopPodSandbox for \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" returns successfully" Jan 29 11:24:06.558084 containerd[1603]: time="2025-01-29T11:24:06.557276297Z" level=info msg="RemovePodSandbox for \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\"" Jan 29 11:24:06.558084 containerd[1603]: time="2025-01-29T11:24:06.557302125Z" level=info msg="Forcibly stopping sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\"" Jan 29 11:24:06.558084 containerd[1603]: time="2025-01-29T11:24:06.557365885Z" level=info msg="TearDown network for sandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" successfully" Jan 29 11:24:06.560511 containerd[1603]: time="2025-01-29T11:24:06.560484827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.560565 containerd[1603]: time="2025-01-29T11:24:06.560517058Z" level=info msg="RemovePodSandbox \"0a0b8f5fbc61b38d04a02e0696e72080203cbaa74c453a5c31bc4173186e5e12\" returns successfully" Jan 29 11:24:06.560829 containerd[1603]: time="2025-01-29T11:24:06.560789950Z" level=info msg="StopPodSandbox for \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\"" Jan 29 11:24:06.560900 containerd[1603]: time="2025-01-29T11:24:06.560882985Z" level=info msg="TearDown network for sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" successfully" Jan 29 11:24:06.560900 containerd[1603]: time="2025-01-29T11:24:06.560896921Z" level=info msg="StopPodSandbox for \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" returns successfully" Jan 29 11:24:06.561227 containerd[1603]: time="2025-01-29T11:24:06.561206251Z" level=info msg="RemovePodSandbox for \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\"" Jan 29 11:24:06.561334 containerd[1603]: time="2025-01-29T11:24:06.561230476Z" level=info msg="Forcibly stopping sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\"" Jan 29 11:24:06.561334 containerd[1603]: time="2025-01-29T11:24:06.561285039Z" level=info msg="TearDown network for sandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" successfully" Jan 29 11:24:06.565002 containerd[1603]: time="2025-01-29T11:24:06.564970094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.565055 containerd[1603]: time="2025-01-29T11:24:06.565026780Z" level=info msg="RemovePodSandbox \"01b9948813d86a342c856d2a2e178a992786c11c149ebaf66eaea206f0416ece\" returns successfully" Jan 29 11:24:06.565415 containerd[1603]: time="2025-01-29T11:24:06.565368622Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:24:06.565497 containerd[1603]: time="2025-01-29T11:24:06.565474050Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:24:06.565536 containerd[1603]: time="2025-01-29T11:24:06.565494779Z" level=info msg="StopPodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:24:06.565811 containerd[1603]: time="2025-01-29T11:24:06.565756960Z" level=info msg="RemovePodSandbox for \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:24:06.565811 containerd[1603]: time="2025-01-29T11:24:06.565790082Z" level=info msg="Forcibly stopping sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\"" Jan 29 11:24:06.565885 containerd[1603]: time="2025-01-29T11:24:06.565859803Z" level=info msg="TearDown network for sandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" successfully" Jan 29 11:24:06.571096 containerd[1603]: time="2025-01-29T11:24:06.571063358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.571227 containerd[1603]: time="2025-01-29T11:24:06.571108402Z" level=info msg="RemovePodSandbox \"cc80574bbfd01ffb3b29f049fbf5a8971a1252421f164a72de9457e75c4742b1\" returns successfully" Jan 29 11:24:06.571404 containerd[1603]: time="2025-01-29T11:24:06.571346459Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:24:06.571445 containerd[1603]: time="2025-01-29T11:24:06.571419405Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:24:06.571445 containerd[1603]: time="2025-01-29T11:24:06.571428502Z" level=info msg="StopPodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:24:06.571780 containerd[1603]: time="2025-01-29T11:24:06.571744857Z" level=info msg="RemovePodSandbox for \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:24:06.571780 containerd[1603]: time="2025-01-29T11:24:06.571772709Z" level=info msg="Forcibly stopping sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\"" Jan 29 11:24:06.571883 containerd[1603]: time="2025-01-29T11:24:06.571841067Z" level=info msg="TearDown network for sandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" successfully" Jan 29 11:24:06.575612 containerd[1603]: time="2025-01-29T11:24:06.575582568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.575692 containerd[1603]: time="2025-01-29T11:24:06.575639264Z" level=info msg="RemovePodSandbox \"8fe8c5cbb386ed50d86378de70ecca3abc1618b7373ace6a0ed5bdc86ade1a2e\" returns successfully" Jan 29 11:24:06.576025 containerd[1603]: time="2025-01-29T11:24:06.576003718Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:24:06.576569 containerd[1603]: time="2025-01-29T11:24:06.576472337Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully" Jan 29 11:24:06.576569 containerd[1603]: time="2025-01-29T11:24:06.576519145Z" level=info msg="StopPodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully" Jan 29 11:24:06.576832 containerd[1603]: time="2025-01-29T11:24:06.576804701Z" level=info msg="RemovePodSandbox for \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:24:06.576876 containerd[1603]: time="2025-01-29T11:24:06.576838755Z" level=info msg="Forcibly stopping sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\"" Jan 29 11:24:06.577048 containerd[1603]: time="2025-01-29T11:24:06.576994978Z" level=info msg="TearDown network for sandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" successfully" Jan 29 11:24:06.581027 containerd[1603]: time="2025-01-29T11:24:06.580929621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.581027 containerd[1603]: time="2025-01-29T11:24:06.580969105Z" level=info msg="RemovePodSandbox \"fd73b4a6b75834fb609bf41c57e37fd8e8c0981947c698afb46226232dc313a3\" returns successfully" Jan 29 11:24:06.581402 containerd[1603]: time="2025-01-29T11:24:06.581374325Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:24:06.581496 containerd[1603]: time="2025-01-29T11:24:06.581460417Z" level=info msg="TearDown network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" successfully" Jan 29 11:24:06.581496 containerd[1603]: time="2025-01-29T11:24:06.581472309Z" level=info msg="StopPodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" returns successfully" Jan 29 11:24:06.581779 containerd[1603]: time="2025-01-29T11:24:06.581737287Z" level=info msg="RemovePodSandbox for \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:24:06.581779 containerd[1603]: time="2025-01-29T11:24:06.581762494Z" level=info msg="Forcibly stopping sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\"" Jan 29 11:24:06.581921 containerd[1603]: time="2025-01-29T11:24:06.581826464Z" level=info msg="TearDown network for sandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" successfully" Jan 29 11:24:06.585402 containerd[1603]: time="2025-01-29T11:24:06.585369513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.585471 containerd[1603]: time="2025-01-29T11:24:06.585411882Z" level=info msg="RemovePodSandbox \"1d77bf54e56ec52d0e23009e2e5d21f1a100fee03b4c3824e2f8e0871e8822e4\" returns successfully" Jan 29 11:24:06.585742 containerd[1603]: time="2025-01-29T11:24:06.585692969Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" Jan 29 11:24:06.585827 containerd[1603]: time="2025-01-29T11:24:06.585800010Z" level=info msg="TearDown network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" successfully" Jan 29 11:24:06.585827 containerd[1603]: time="2025-01-29T11:24:06.585811872Z" level=info msg="StopPodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" returns successfully" Jan 29 11:24:06.586058 containerd[1603]: time="2025-01-29T11:24:06.586033839Z" level=info msg="RemovePodSandbox for \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" Jan 29 11:24:06.586058 containerd[1603]: time="2025-01-29T11:24:06.586057944Z" level=info msg="Forcibly stopping sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\"" Jan 29 11:24:06.586150 containerd[1603]: time="2025-01-29T11:24:06.586124078Z" level=info msg="TearDown network for sandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" successfully" Jan 29 11:24:06.589710 containerd[1603]: time="2025-01-29T11:24:06.589674410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.589710 containerd[1603]: time="2025-01-29T11:24:06.589709396Z" level=info msg="RemovePodSandbox \"9ef868550e31b59e4a32981a7d1b5578ce4c81b59042f76ab0ce4e3d0628f15b\" returns successfully" Jan 29 11:24:06.590423 containerd[1603]: time="2025-01-29T11:24:06.590401845Z" level=info msg="StopPodSandbox for \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\"" Jan 29 11:24:06.590504 containerd[1603]: time="2025-01-29T11:24:06.590478419Z" level=info msg="TearDown network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" successfully" Jan 29 11:24:06.590504 containerd[1603]: time="2025-01-29T11:24:06.590488197Z" level=info msg="StopPodSandbox for \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" returns successfully" Jan 29 11:24:06.590791 containerd[1603]: time="2025-01-29T11:24:06.590762643Z" level=info msg="RemovePodSandbox for \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\"" Jan 29 11:24:06.590863 containerd[1603]: time="2025-01-29T11:24:06.590797899Z" level=info msg="Forcibly stopping sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\"" Jan 29 11:24:06.590932 containerd[1603]: time="2025-01-29T11:24:06.590887818Z" level=info msg="TearDown network for sandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" successfully" Jan 29 11:24:06.594253 containerd[1603]: time="2025-01-29T11:24:06.594227063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:24:06.594317 containerd[1603]: time="2025-01-29T11:24:06.594277738Z" level=info msg="RemovePodSandbox \"8c761694cd29a4c74733aaeb1fe337187162d15cf32fd33ee9a8abfb0289605e\" returns successfully" Jan 29 11:24:06.798113 kubelet[2833]: E0129 11:24:06.797986 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:24:06.809461 containerd[1603]: time="2025-01-29T11:24:06.809415925Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:24:06.823553 containerd[1603]: time="2025-01-29T11:24:06.823504772Z" level=info msg="CreateContainer within sandbox \"bc1427229bd4c73082584307d9d9dc7fcdad56ae70fc952981e8d1ba4ea9283c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4ff8a51010a286fc8af7c852480a36745b434d25d814180d56896603334f8cb\"" Jan 29 11:24:06.824076 containerd[1603]: time="2025-01-29T11:24:06.823991626Z" level=info msg="StartContainer for \"e4ff8a51010a286fc8af7c852480a36745b434d25d814180d56896603334f8cb\"" Jan 29 11:24:06.889325 containerd[1603]: time="2025-01-29T11:24:06.889268484Z" level=info msg="StartContainer for \"e4ff8a51010a286fc8af7c852480a36745b434d25d814180d56896603334f8cb\" returns successfully" Jan 29 11:24:07.801238 kubelet[2833]: E0129 11:24:07.801201 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:24:07.818876 kubelet[2833]: I0129 11:24:07.818821 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6lnzf" podStartSLOduration=4.818802115 podStartE2EDuration="4.818802115s" podCreationTimestamp="2025-01-29 11:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:07.817954434 +0000 UTC m=+61.623010539" watchObservedRunningTime="2025-01-29 11:24:07.818802115 +0000 UTC m=+61.623858220" Jan 29 11:24:07.891244 containerd[1603]: time="2025-01-29T11:24:07.891178477Z" level=info msg="shim disconnected" id=cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf namespace=k8s.io Jan 29 11:24:07.891244 containerd[1603]: time="2025-01-29T11:24:07.891235484Z" level=warning msg="cleaning up after shim disconnected" id=cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf namespace=k8s.io Jan 29 11:24:07.891244 containerd[1603]: time="2025-01-29T11:24:07.891244711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:07.892691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf-rootfs.mount: Deactivated successfully. Jan 29 11:24:07.931875 containerd[1603]: time="2025-01-29T11:24:07.931814157Z" level=info msg="StopContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" returns successfully" Jan 29 11:24:07.932377 containerd[1603]: time="2025-01-29T11:24:07.932346720Z" level=info msg="StopPodSandbox for \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\"" Jan 29 11:24:07.932509 containerd[1603]: time="2025-01-29T11:24:07.932382007Z" level=info msg="Container to stop \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:24:07.935427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f-shm.mount: Deactivated successfully. Jan 29 11:24:07.962179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:07.963232 containerd[1603]: time="2025-01-29T11:24:07.963136322Z" level=info msg="shim disconnected" id=556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f namespace=k8s.io Jan 29 11:24:07.963232 containerd[1603]: time="2025-01-29T11:24:07.963227835Z" level=warning msg="cleaning up after shim disconnected" id=556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f namespace=k8s.io Jan 29 11:24:07.963330 containerd[1603]: time="2025-01-29T11:24:07.963237623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:07.990687 containerd[1603]: time="2025-01-29T11:24:07.990609229Z" level=info msg="TearDown network for sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" successfully" Jan 29 11:24:07.990687 containerd[1603]: time="2025-01-29T11:24:07.990677408Z" level=info msg="StopPodSandbox for \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" returns successfully" Jan 29 11:24:08.102682 kubelet[2833]: I0129 11:24:08.101367 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/659fbdea-0af7-4cb8-bb83-31663ca81960-tigera-ca-bundle\") pod \"659fbdea-0af7-4cb8-bb83-31663ca81960\" (UID: \"659fbdea-0af7-4cb8-bb83-31663ca81960\") " Jan 29 11:24:08.102682 kubelet[2833]: I0129 11:24:08.101421 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5jtq\" (UniqueName: \"kubernetes.io/projected/659fbdea-0af7-4cb8-bb83-31663ca81960-kube-api-access-t5jtq\") pod \"659fbdea-0af7-4cb8-bb83-31663ca81960\" (UID: \"659fbdea-0af7-4cb8-bb83-31663ca81960\") " Jan 29 11:24:08.102682 kubelet[2833]: I0129 11:24:08.101451 2833 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/659fbdea-0af7-4cb8-bb83-31663ca81960-typha-certs\") pod \"659fbdea-0af7-4cb8-bb83-31663ca81960\" (UID: 
\"659fbdea-0af7-4cb8-bb83-31663ca81960\") " Jan 29 11:24:08.108762 kubelet[2833]: I0129 11:24:08.108688 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659fbdea-0af7-4cb8-bb83-31663ca81960-kube-api-access-t5jtq" (OuterVolumeSpecName: "kube-api-access-t5jtq") pod "659fbdea-0af7-4cb8-bb83-31663ca81960" (UID: "659fbdea-0af7-4cb8-bb83-31663ca81960"). InnerVolumeSpecName "kube-api-access-t5jtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:08.112337 kubelet[2833]: I0129 11:24:08.110776 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659fbdea-0af7-4cb8-bb83-31663ca81960-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "659fbdea-0af7-4cb8-bb83-31663ca81960" (UID: "659fbdea-0af7-4cb8-bb83-31663ca81960"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:24:08.112511 kubelet[2833]: I0129 11:24:08.111413 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659fbdea-0af7-4cb8-bb83-31663ca81960-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "659fbdea-0af7-4cb8-bb83-31663ca81960" (UID: "659fbdea-0af7-4cb8-bb83-31663ca81960"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:24:08.202334 kubelet[2833]: I0129 11:24:08.202285 2833 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/659fbdea-0af7-4cb8-bb83-31663ca81960-typha-certs\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:08.202334 kubelet[2833]: I0129 11:24:08.202321 2833 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/659fbdea-0af7-4cb8-bb83-31663ca81960-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:08.202334 kubelet[2833]: I0129 11:24:08.202333 2833 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t5jtq\" (UniqueName: \"kubernetes.io/projected/659fbdea-0af7-4cb8-bb83-31663ca81960-kube-api-access-t5jtq\") on node \"localhost\" DevicePath \"\"" Jan 29 11:24:08.804451 kubelet[2833]: E0129 11:24:08.804408 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:24:08.806974 kubelet[2833]: I0129 11:24:08.805495 2833 scope.go:117] "RemoveContainer" containerID="cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf" Jan 29 11:24:08.808823 containerd[1603]: time="2025-01-29T11:24:08.808784582Z" level=info msg="RemoveContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\"" Jan 29 11:24:08.808936 systemd[1]: var-lib-kubelet-pods-659fbdea\x2d0af7\x2d4cb8\x2dbb83\x2d31663ca81960-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 29 11:24:08.809206 systemd[1]: var-lib-kubelet-pods-659fbdea\x2d0af7\x2d4cb8\x2dbb83\x2d31663ca81960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5jtq.mount: Deactivated successfully. 
Jan 29 11:24:08.809399 systemd[1]: var-lib-kubelet-pods-659fbdea\x2d0af7\x2d4cb8\x2dbb83\x2d31663ca81960-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 29 11:24:08.828561 systemd[1]: run-containerd-runc-k8s.io-e4ff8a51010a286fc8af7c852480a36745b434d25d814180d56896603334f8cb-runc.9Jcq4l.mount: Deactivated successfully. Jan 29 11:24:08.942372 containerd[1603]: time="2025-01-29T11:24:08.942327057Z" level=info msg="RemoveContainer for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" returns successfully" Jan 29 11:24:08.942808 kubelet[2833]: I0129 11:24:08.942502 2833 scope.go:117] "RemoveContainer" containerID="cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf" Jan 29 11:24:08.942877 containerd[1603]: time="2025-01-29T11:24:08.942824208Z" level=error msg="ContainerStatus for \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\": not found" Jan 29 11:24:08.943052 kubelet[2833]: E0129 11:24:08.943030 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\": not found" containerID="cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf" Jan 29 11:24:08.943083 kubelet[2833]: I0129 11:24:08.943057 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf"} err="failed to get container status \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfb1a20f25d460bb031ae2d9c2d4e67646b5957042c7c73ff618848d9a9cb5cf\": not found" Jan 29 11:24:10.281251 
kubelet[2833]: I0129 11:24:10.281197 2833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659fbdea-0af7-4cb8-bb83-31663ca81960" path="/var/lib/kubelet/pods/659fbdea-0af7-4cb8-bb83-31663ca81960/volumes" Jan 29 11:24:11.379033 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:57912.service - OpenSSH per-connection server daemon (10.0.0.1:57912). Jan 29 11:24:11.450090 sshd[7042]: Accepted publickey for core from 10.0.0.1 port 57912 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:11.452047 sshd-session[7042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:11.456565 systemd-logind[1584]: New session 20 of user core. Jan 29 11:24:11.464969 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:24:11.594724 sshd[7060]: Connection closed by 10.0.0.1 port 57912 Jan 29 11:24:11.595110 sshd-session[7042]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:11.599227 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:57912.service: Deactivated successfully. Jan 29 11:24:11.601712 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:24:11.602377 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:24:11.603266 systemd-logind[1584]: Removed session 20. Jan 29 11:24:14.912348 kubelet[2833]: I0129 11:24:14.912310 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:24:16.606998 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904). Jan 29 11:24:16.646662 sshd[7179]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:16.648297 sshd-session[7179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:16.652462 systemd-logind[1584]: New session 21 of user core. 
Jan 29 11:24:16.661977 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:24:16.792552 sshd[7182]: Connection closed by 10.0.0.1 port 33904 Jan 29 11:24:16.792956 sshd-session[7179]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:16.796716 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:33904.service: Deactivated successfully. Jan 29 11:24:16.801170 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:24:16.801588 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:24:16.803265 systemd-logind[1584]: Removed session 21. Jan 29 11:24:21.809879 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912). Jan 29 11:24:21.887347 sshd[7357]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:21.889306 sshd-session[7357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:21.893658 systemd-logind[1584]: New session 22 of user core. Jan 29 11:24:21.899908 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:24:22.019996 sshd[7360]: Connection closed by 10.0.0.1 port 33912 Jan 29 11:24:22.020328 sshd-session[7357]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:22.024631 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:33912.service: Deactivated successfully. Jan 29 11:24:22.027089 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:24:22.027158 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:24:22.028159 systemd-logind[1584]: Removed session 22. Jan 29 11:24:27.029856 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:46786.service - OpenSSH per-connection server daemon (10.0.0.1:46786). 
Jan 29 11:24:27.067505 sshd[7383]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:27.069110 sshd-session[7383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:27.072861 systemd-logind[1584]: New session 23 of user core. Jan 29 11:24:27.082018 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:24:27.195836 sshd[7386]: Connection closed by 10.0.0.1 port 46786 Jan 29 11:24:27.196159 sshd-session[7383]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:27.200268 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:46786.service: Deactivated successfully. Jan 29 11:24:27.202668 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:24:27.202683 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:24:27.203590 systemd-logind[1584]: Removed session 23. Jan 29 11:24:32.208218 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:46796.service - OpenSSH per-connection server daemon (10.0.0.1:46796). Jan 29 11:24:32.250804 sshd[7406]: Accepted publickey for core from 10.0.0.1 port 46796 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:24:32.252277 sshd-session[7406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:24:32.256074 systemd-logind[1584]: New session 24 of user core. Jan 29 11:24:32.263886 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:24:32.368445 sshd[7409]: Connection closed by 10.0.0.1 port 46796 Jan 29 11:24:32.368807 sshd-session[7406]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:32.372733 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:46796.service: Deactivated successfully. Jan 29 11:24:32.375708 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:24:32.375805 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 29 11:24:32.377379 systemd-logind[1584]: Removed session 24.
Jan 29 11:24:34.052665 kubelet[2833]: E0129 11:24:34.050807 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:37.376894 systemd[1]: Started sshd@24-10.0.0.145:22-10.0.0.1:47854.service - OpenSSH per-connection server daemon (10.0.0.1:47854).
Jan 29 11:24:37.423939 sshd[7443]: Accepted publickey for core from 10.0.0.1 port 47854 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:37.425727 sshd-session[7443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:37.429931 systemd-logind[1584]: New session 25 of user core.
Jan 29 11:24:37.437966 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:24:37.561999 sshd[7446]: Connection closed by 10.0.0.1 port 47854
Jan 29 11:24:37.562380 sshd-session[7443]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:37.567628 systemd[1]: sshd@24-10.0.0.145:22-10.0.0.1:47854.service: Deactivated successfully.
Jan 29 11:24:37.571149 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:24:37.572638 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:24:37.573871 systemd-logind[1584]: Removed session 25.
Jan 29 11:24:38.278621 kubelet[2833]: E0129 11:24:38.278576 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:40.279036 kubelet[2833]: E0129 11:24:40.278877 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:42.580060 systemd[1]: Started sshd@25-10.0.0.145:22-10.0.0.1:47860.service - OpenSSH per-connection server daemon (10.0.0.1:47860).
Jan 29 11:24:42.620189 sshd[7467]: Accepted publickey for core from 10.0.0.1 port 47860 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:42.621732 sshd-session[7467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:42.626087 systemd-logind[1584]: New session 26 of user core.
Jan 29 11:24:42.634977 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:24:42.742172 sshd[7470]: Connection closed by 10.0.0.1 port 47860
Jan 29 11:24:42.742516 sshd-session[7467]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:42.746323 systemd[1]: sshd@25-10.0.0.145:22-10.0.0.1:47860.service: Deactivated successfully.
Jan 29 11:24:42.748636 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:24:42.749260 systemd-logind[1584]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:24:42.750088 systemd-logind[1584]: Removed session 26.
Jan 29 11:24:43.278146 kubelet[2833]: E0129 11:24:43.278107 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:47.757857 systemd[1]: Started sshd@26-10.0.0.145:22-10.0.0.1:43716.service - OpenSSH per-connection server daemon (10.0.0.1:43716).
Jan 29 11:24:47.800016 sshd[7482]: Accepted publickey for core from 10.0.0.1 port 43716 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:47.801747 sshd-session[7482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:47.805810 systemd-logind[1584]: New session 27 of user core.
Jan 29 11:24:47.816911 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 11:24:48.096689 sshd[7485]: Connection closed by 10.0.0.1 port 43716
Jan 29 11:24:48.097118 sshd-session[7482]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:48.103599 systemd[1]: sshd@26-10.0.0.145:22-10.0.0.1:43716.service: Deactivated successfully.
Jan 29 11:24:48.105906 systemd-logind[1584]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:24:48.105970 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:24:48.107313 systemd-logind[1584]: Removed session 27.
Jan 29 11:24:48.278549 kubelet[2833]: E0129 11:24:48.278513 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:24:53.116926 systemd[1]: Started sshd@27-10.0.0.145:22-10.0.0.1:43720.service - OpenSSH per-connection server daemon (10.0.0.1:43720).
Jan 29 11:24:53.154795 sshd[7510]: Accepted publickey for core from 10.0.0.1 port 43720 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:53.156321 sshd-session[7510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:53.160444 systemd-logind[1584]: New session 28 of user core.
Jan 29 11:24:53.166934 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 11:24:53.284340 sshd[7513]: Connection closed by 10.0.0.1 port 43720
Jan 29 11:24:53.284702 sshd-session[7510]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:53.289525 systemd[1]: sshd@27-10.0.0.145:22-10.0.0.1:43720.service: Deactivated successfully.
Jan 29 11:24:53.292540 systemd-logind[1584]: Session 28 logged out. Waiting for processes to exit.
Jan 29 11:24:53.292677 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 11:24:53.294025 systemd-logind[1584]: Removed session 28.
Jan 29 11:24:58.296868 systemd[1]: Started sshd@28-10.0.0.145:22-10.0.0.1:34484.service - OpenSSH per-connection server daemon (10.0.0.1:34484).
Jan 29 11:24:58.334482 sshd[7526]: Accepted publickey for core from 10.0.0.1 port 34484 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:24:58.335870 sshd-session[7526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:24:58.339570 systemd-logind[1584]: New session 29 of user core.
Jan 29 11:24:58.349909 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 11:24:58.455038 sshd[7529]: Connection closed by 10.0.0.1 port 34484
Jan 29 11:24:58.455384 sshd-session[7526]: pam_unix(sshd:session): session closed for user core
Jan 29 11:24:58.459477 systemd[1]: sshd@28-10.0.0.145:22-10.0.0.1:34484.service: Deactivated successfully.
Jan 29 11:24:58.461954 systemd-logind[1584]: Session 29 logged out. Waiting for processes to exit.
Jan 29 11:24:58.462013 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 11:24:58.463207 systemd-logind[1584]: Removed session 29.
Jan 29 11:25:03.469945 systemd[1]: Started sshd@29-10.0.0.145:22-10.0.0.1:34490.service - OpenSSH per-connection server daemon (10.0.0.1:34490).
Jan 29 11:25:03.507701 sshd[7551]: Accepted publickey for core from 10.0.0.1 port 34490 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:25:03.509019 sshd-session[7551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:03.513345 systemd-logind[1584]: New session 30 of user core.
Jan 29 11:25:03.518901 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 29 11:25:03.644605 sshd[7554]: Connection closed by 10.0.0.1 port 34490
Jan 29 11:25:03.644988 sshd-session[7551]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:03.650048 systemd[1]: sshd@29-10.0.0.145:22-10.0.0.1:34490.service: Deactivated successfully.
Jan 29 11:25:03.652758 systemd-logind[1584]: Session 30 logged out. Waiting for processes to exit.
Jan 29 11:25:03.652809 systemd[1]: session-30.scope: Deactivated successfully.
Jan 29 11:25:03.654557 systemd-logind[1584]: Removed session 30.
Jan 29 11:25:06.597978 containerd[1603]: time="2025-01-29T11:25:06.597935418Z" level=info msg="StopPodSandbox for \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\""
Jan 29 11:25:06.598512 containerd[1603]: time="2025-01-29T11:25:06.598023905Z" level=info msg="TearDown network for sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" successfully"
Jan 29 11:25:06.598512 containerd[1603]: time="2025-01-29T11:25:06.598034555Z" level=info msg="StopPodSandbox for \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" returns successfully"
Jan 29 11:25:06.598512 containerd[1603]: time="2025-01-29T11:25:06.598238391Z" level=info msg="RemovePodSandbox for \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\""
Jan 29 11:25:06.598512 containerd[1603]: time="2025-01-29T11:25:06.598255323Z" level=info msg="Forcibly stopping sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\""
Jan 29 11:25:06.598512 containerd[1603]: time="2025-01-29T11:25:06.598295198Z" level=info msg="TearDown network for sandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" successfully"
Jan 29 11:25:06.687245 containerd[1603]: time="2025-01-29T11:25:06.687193959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:25:06.687397 containerd[1603]: time="2025-01-29T11:25:06.687262708Z" level=info msg="RemovePodSandbox \"556d0b7c50a9a5271a19fb71da0a69165f88010ff5ab7ab4ad6e75034179bf1f\" returns successfully"
Jan 29 11:25:08.655961 systemd[1]: Started sshd@30-10.0.0.145:22-10.0.0.1:38630.service - OpenSSH per-connection server daemon (10.0.0.1:38630).
Jan 29 11:25:08.697741 sshd[7591]: Accepted publickey for core from 10.0.0.1 port 38630 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:25:08.699227 sshd-session[7591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:25:08.703078 systemd-logind[1584]: New session 31 of user core.
Jan 29 11:25:08.712066 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 29 11:25:08.819339 sshd[7594]: Connection closed by 10.0.0.1 port 38630
Jan 29 11:25:08.819785 sshd-session[7591]: pam_unix(sshd:session): session closed for user core
Jan 29 11:25:08.824229 systemd[1]: sshd@30-10.0.0.145:22-10.0.0.1:38630.service: Deactivated successfully.
Jan 29 11:25:08.826627 systemd-logind[1584]: Session 31 logged out. Waiting for processes to exit.
Jan 29 11:25:08.826688 systemd[1]: session-31.scope: Deactivated successfully.
Jan 29 11:25:08.827763 systemd-logind[1584]: Removed session 31.
Jan 29 11:25:10.278392 kubelet[2833]: E0129 11:25:10.278338 2833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"