Mar 2 13:25:53.730679 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 13:25:53.730857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 13:25:53.730874 kernel: BIOS-provided physical RAM map:
Mar 2 13:25:53.730890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 13:25:53.730899 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 13:25:53.730908 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 13:25:53.730918 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 13:25:53.731067 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 13:25:53.731487 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 13:25:53.731501 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 13:25:53.731510 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 2 13:25:53.731519 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 13:25:53.748106 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 13:25:53.748166 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 13:25:53.748177 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 13:25:53.748187 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 13:25:53.748253 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 13:25:53.748336 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 13:25:53.748347 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 13:25:53.748357 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 13:25:53.748367 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 13:25:53.748376 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 13:25:53.748385 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 13:25:53.748395 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:25:53.748404 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 13:25:53.748413 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:25:53.748422 kernel: NX (Execute Disable) protection: active
Mar 2 13:25:53.748431 kernel: APIC: Static calls initialized
Mar 2 13:25:53.748444 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 2 13:25:53.748454 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 2 13:25:53.748463 kernel: extended physical RAM map:
Mar 2 13:25:53.748473 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 13:25:53.748482 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 13:25:53.748491 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 13:25:53.748500 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 13:25:53.748510 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 13:25:53.748519 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 13:25:53.748528 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 13:25:53.748537 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 2 13:25:53.748550 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 2 13:25:53.748563 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 2 13:25:53.748574 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 2 13:25:53.748583 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 2 13:25:53.748593 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 13:25:53.748605 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 13:25:53.748615 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 13:25:53.748625 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 13:25:53.748635 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 13:25:53.748645 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 13:25:53.748654 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 13:25:53.748664 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 13:25:53.748674 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 13:25:53.748683 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 13:25:53.748693 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 13:25:53.748703 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 13:25:53.748719 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:25:53.748833 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 13:25:53.748844 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:25:53.748900 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:25:53.748915 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 2 13:25:53.750858 kernel: random: crng init done
Mar 2 13:25:53.750875 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 2 13:25:53.750996 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 2 13:25:53.751008 kernel: secureboot: Secure boot disabled
Mar 2 13:25:53.751018 kernel: SMBIOS 2.8 present.
Mar 2 13:25:53.751028 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 2 13:25:53.751047 kernel: DMI: Memory slots populated: 1/1
Mar 2 13:25:53.751057 kernel: Hypervisor detected: KVM
Mar 2 13:25:53.751068 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 13:25:53.751079 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:25:53.751089 kernel: kvm-clock: using sched offset of 65624543091 cycles
Mar 2 13:25:53.751102 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:25:53.751112 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 13:25:53.751124 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:25:53.751134 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:25:53.751144 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 13:25:53.751154 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 13:25:53.751169 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:25:53.751179 kernel: Using GB pages for direct mapping
Mar 2 13:25:53.751189 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:25:53.751199 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 13:25:53.751209 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 13:25:53.751220 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751230 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751240 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 13:25:53.751256 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751266 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751276 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751287 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:25:53.751297 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 13:25:53.751309 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 13:25:53.751319 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 13:25:53.751329 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 13:25:53.751339 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 13:25:53.751353 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 13:25:53.751363 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 13:25:53.751373 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 13:25:53.751384 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 13:25:53.751394 kernel: No NUMA configuration found
Mar 2 13:25:53.751406 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 2 13:25:53.751419 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 2 13:25:53.751429 kernel: Zone ranges:
Mar 2 13:25:53.751438 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:25:53.751453 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 2 13:25:53.751463 kernel: Normal empty
Mar 2 13:25:53.751476 kernel: Device empty
Mar 2 13:25:53.751487 kernel: Movable zone start for each node
Mar 2 13:25:53.751497 kernel: Early memory node ranges
Mar 2 13:25:53.751506 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 13:25:53.751571 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 13:25:53.751583 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 13:25:53.751593 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 2 13:25:53.751610 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 2 13:25:53.751621 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 2 13:25:53.751630 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 2 13:25:53.751639 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 2 13:25:53.751649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 2 13:25:53.751713 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:25:53.751838 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 13:25:53.751854 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 13:25:53.751865 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:25:53.751875 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 2 13:25:53.751886 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 2 13:25:53.751896 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 2 13:25:53.751910 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 2 13:25:53.751922 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 2 13:25:53.752004 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:25:53.752015 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:25:53.752026 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:25:53.752043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:25:53.752054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:25:53.752065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:25:53.752076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:25:53.752086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:25:53.752098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:25:53.752108 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:25:53.752119 kernel: TSC deadline timer available
Mar 2 13:25:53.752129 kernel: CPU topo: Max. logical packages: 1
Mar 2 13:25:53.752143 kernel: CPU topo: Max. logical dies: 1
Mar 2 13:25:53.752154 kernel: CPU topo: Max. dies per package: 1
Mar 2 13:25:53.752165 kernel: CPU topo: Max. threads per core: 1
Mar 2 13:25:53.752176 kernel: CPU topo: Num. cores per package: 4
Mar 2 13:25:53.752187 kernel: CPU topo: Num. threads per package: 4
Mar 2 13:25:53.752198 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 2 13:25:53.752209 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:25:53.752219 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:25:53.752230 kernel: kvm-guest: setup PV sched yield
Mar 2 13:25:53.752244 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 2 13:25:53.752255 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:25:53.752266 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:25:53.752276 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:25:53.752286 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 2 13:25:53.752297 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 2 13:25:53.752307 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:25:53.752317 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:25:53.752328 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:25:53.752401 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 13:25:53.752414 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:25:53.752425 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:25:53.752436 kernel: Fallback order for Node 0: 0
Mar 2 13:25:53.752446 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 2 13:25:53.752457 kernel: Policy zone: DMA32
Mar 2 13:25:53.752468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:25:53.752478 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:25:53.752493 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 13:25:53.752503 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 13:25:53.752513 kernel: Dynamic Preempt: voluntary
Mar 2 13:25:53.752523 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:25:53.752535 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:25:53.752546 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:25:53.752556 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:25:53.752567 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:25:53.752577 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:25:53.752590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:25:53.752606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:25:53.752668 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:25:53.752681 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:25:53.752692 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:25:53.752702 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:25:53.752713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:25:53.752723 kernel: Console: colour dummy device 80x25
Mar 2 13:25:53.752832 kernel: printk: legacy console [ttyS0] enabled
Mar 2 13:25:53.752849 kernel: ACPI: Core revision 20240827
Mar 2 13:25:53.752860 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:25:53.752871 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:25:53.752882 kernel: x2apic enabled
Mar 2 13:25:53.752893 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:25:53.752904 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:25:53.752915 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:25:53.752987 kernel: kvm-guest: setup PV IPIs
Mar 2 13:25:53.753003 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:25:53.753019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 13:25:53.753029 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 13:25:53.753041 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:25:53.753052 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:25:53.753062 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:25:53.753072 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:25:53.753083 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:25:53.753093 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:25:53.753104 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:25:53.753120 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:25:53.753132 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:25:53.753195 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:25:53.753207 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:25:53.753218 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:25:53.753229 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:25:53.753239 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:25:53.753250 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:25:53.753261 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:25:53.753276 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:25:53.753287 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:25:53.753297 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:25:53.753308 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:25:53.753318 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 13:25:53.753330 kernel: landlock: Up and running.
Mar 2 13:25:53.753341 kernel: SELinux: Initializing.
Mar 2 13:25:53.753351 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:25:53.753362 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:25:53.753376 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:25:53.753387 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:25:53.753397 kernel: signal: max sigframe size: 1776
Mar 2 13:25:53.753408 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:25:53.753419 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:25:53.753430 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 2 13:25:53.753440 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:25:53.753450 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:25:53.753464 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:25:53.753474 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:25:53.753484 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:25:53.753495 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 13:25:53.753559 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145388K reserved, 0K cma-reserved)
Mar 2 13:25:53.753571 kernel: devtmpfs: initialized
Mar 2 13:25:53.753582 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:25:53.753593 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 13:25:53.753604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 13:25:53.753619 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 2 13:25:53.753630 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 13:25:53.753641 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 2 13:25:53.753653 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 13:25:53.753667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:25:53.753677 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:25:53.753687 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:25:53.753697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:25:53.753707 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:25:53.753828 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:25:53.753845 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:25:53.753856 kernel: audit: type=2000 audit(1772457911.912:1): state=initialized audit_enabled=0 res=1
Mar 2 13:25:53.753866 kernel: cpuidle: using governor menu
Mar 2 13:25:53.753876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:25:53.753888 kernel: dca service started, version 1.12.1
Mar 2 13:25:53.753901 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 2 13:25:53.753913 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:25:53.753923 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:25:53.754011 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:25:53.754022 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:25:53.754032 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:25:53.754043 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:25:53.754055 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:25:53.754067 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:25:53.754079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:25:53.754089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:25:53.754101 kernel: ACPI: Interpreter enabled
Mar 2 13:25:53.754118 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:25:53.754131 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:25:53.754144 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:25:53.754154 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:25:53.754164 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:25:53.754177 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:25:53.755547 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:25:53.755912 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:25:53.762699 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:25:53.763651 kernel: PCI host bridge to bus 0000:00
Mar 2 13:25:53.764439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:25:53.764641 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:25:53.766091 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:25:53.766281 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 2 13:25:53.766568 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 2 13:25:53.767025 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 2 13:25:53.767222 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:25:53.767871 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 13:25:53.770287 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 2 13:25:53.770525 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 2 13:25:53.770872 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 2 13:25:53.771270 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 2 13:25:53.782229 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:25:53.782611 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 72265 usecs
Mar 2 13:25:53.789719 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 2 13:25:53.790411 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 2 13:25:53.790630 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 2 13:25:53.796865 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 2 13:25:53.797531 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 13:25:53.797863 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 2 13:25:53.798506 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 2 13:25:53.798711 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 2 13:25:53.799299 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 13:25:53.799503 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 2 13:25:53.799717 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 2 13:25:53.800209 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 2 13:25:53.800406 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 2 13:25:53.800860 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 13:25:53.801232 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:25:53.801576 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 13:25:53.802071 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 2 13:25:53.802283 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 2 13:25:53.802551 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 13:25:53.802860 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 2 13:25:53.802882 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:25:53.804019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:25:53.804034 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:25:53.804048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:25:53.804061 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:25:53.804080 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:25:53.804093 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:25:53.804105 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:25:53.804116 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:25:53.804130 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:25:53.804142 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:25:53.804154 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:25:53.804167 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:25:53.804179 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:25:53.804197 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:25:53.804210 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:25:53.804221 kernel: iommu: Default domain type: Translated
Mar 2 13:25:53.804234 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:25:53.804245 kernel: efivars: Registered efivars operations
Mar 2 13:25:53.805065 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:25:53.805078 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:25:53.805090 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 13:25:53.805102 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 2 13:25:53.805120 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 2 13:25:53.805130 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 2 13:25:53.805191 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 2 13:25:53.805206 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 2 13:25:53.805217 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 2 13:25:53.805228 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 2 13:25:53.805450 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:25:53.805656 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:25:53.809037 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:25:53.809059 kernel: vgaarb: loaded
Mar 2 13:25:53.809078 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:25:53.809090 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:25:53.809102 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:25:53.809114 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:25:53.809126 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:25:53.809138 kernel: pnp: PnP ACPI init
Mar 2 13:25:53.809696 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 2 13:25:53.809717 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:25:53.809847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:25:53.809862 kernel: NET: Registered PF_INET protocol family
Mar 2 13:25:53.809876 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:25:53.809886 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:25:53.809896 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:25:53.812080 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:25:53.812104 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:25:53.812117 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:25:53.812129 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:25:53.812145 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:25:53.812289 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:25:53.812309 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:25:53.812537 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 2 13:25:53.812908 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 2 13:25:53.817593 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:25:53.817910 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:25:53.818166 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:25:53.818350 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 2 13:25:53.818517 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 2 13:25:53.818689 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 2 13:25:53.818705 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:25:53.818718 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 13:25:53.818846 kernel: Initialise system trusted keyrings
Mar 2 13:25:53.818861 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:25:53.818875 kernel: Key type asymmetric registered
Mar 2 13:25:53.818892 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:25:53.818905 kernel: hrtimer: interrupt took 7760238 ns
Mar 2 13:25:53.818917 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 13:25:53.821128 kernel: io scheduler mq-deadline registered
Mar 2 13:25:53.821146 kernel: io scheduler kyber registered
Mar 2 13:25:53.821158 kernel: io scheduler bfq registered
Mar 2 13:25:53.821170 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:25:53.821183 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:25:53.821195 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:25:53.821214 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:25:53.821225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:25:53.821237 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:25:53.821249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:25:53.821260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:25:53.821272 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:25:53.821924 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:25:53.823460 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 2 13:25:53.823671 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:25:53.829240 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:25:46 UTC (1772457946)
Mar 2 13:25:53.829456 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 2 13:25:53.829479 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:25:53.829491 kernel: efifb: probing for efifb
Mar 2 13:25:53.829502 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 2 13:25:53.829528 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 2 13:25:53.829539 kernel: efifb: scrolling: redraw
Mar 2 13:25:53.829549 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 13:25:53.829561 kernel: Console: switching to colour frame buffer device 160x50
Mar 2 13:25:53.829574 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:25:53.829588 kernel: pstore: Using crash dump compression: deflate
Mar 2 13:25:53.829599 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 13:25:53.829610 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:25:53.829621 kernel: Segment Routing with IPv6
Mar 2 13:25:53.829641 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:25:53.829656 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:25:53.829667 kernel: Key type dns_resolver registered
Mar 2 13:25:53.829680 kernel: IPI shorthand broadcast: enabled
Mar 2 13:25:53.829693 kernel: sched_clock: Marking stable (35425674245, 3970039642)->(43759051495, -4363337608)
Mar 2 13:25:53.829704 kernel: registered taskstats version 1
Mar 2 13:25:53.829714 kernel: Loading compiled-in X.509 certificates
Mar 2 13:25:53.829832 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: ca052fea375a75b056ebd4154b64794dffb70b96'
Mar 2 13:25:53.829850 kernel: Demotion targets for Node 0: null
Mar 2 13:25:53.829867 kernel: Key type .fscrypt registered
Mar 2 13:25:53.829877
kernel: Key type fscrypt-provisioning registered Mar 2 13:25:53.829888 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 2 13:25:53.829902 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:25:53.829912 kernel: ima: No architecture policies found Mar 2 13:25:53.829923 kernel: clk: Disabling unused clocks Mar 2 13:25:53.831566 kernel: Warning: unable to open an initial console. Mar 2 13:25:53.831582 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 2 13:25:53.831592 kernel: Write protecting the kernel read-only data: 40960k Mar 2 13:25:53.831609 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 2 13:25:53.831620 kernel: Run /init as init process Mar 2 13:25:53.831630 kernel: with arguments: Mar 2 13:25:53.831643 kernel: /init Mar 2 13:25:53.831653 kernel: with environment: Mar 2 13:25:53.831665 kernel: HOME=/ Mar 2 13:25:53.831677 kernel: TERM=linux Mar 2 13:25:53.831692 systemd[1]: Successfully made /usr/ read-only. Mar 2 13:25:53.831711 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 13:25:53.831723 systemd[1]: Detected virtualization kvm. Mar 2 13:25:53.832074 systemd[1]: Detected architecture x86-64. Mar 2 13:25:53.832086 systemd[1]: Running in initrd. Mar 2 13:25:53.832098 systemd[1]: No hostname configured, using default hostname. Mar 2 13:25:53.832111 systemd[1]: Hostname set to . Mar 2 13:25:53.832123 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:25:53.832141 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:25:53.832156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 2 13:25:53.832168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:25:53.832182 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 13:25:53.832194 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:25:53.832206 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 13:25:53.832218 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 13:25:53.832236 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 13:25:53.832247 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 13:25:53.832259 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:25:53.832270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:25:53.832281 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:25:53.832292 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:25:53.832302 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:25:53.832313 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:25:53.832323 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:25:53.832336 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:25:53.832347 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 13:25:53.832357 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 2 13:25:53.832368 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:25:53.832378 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 2 13:25:53.832389 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:25:53.832399 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:25:53.832409 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 13:25:53.832422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:25:53.832432 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 13:25:53.832443 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 2 13:25:53.832453 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 13:25:53.832464 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:25:53.832474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:25:53.832484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:25:53.832494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 13:25:53.844070 systemd-journald[204]: Collecting audit messages is disabled. Mar 2 13:25:53.844237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:25:53.844252 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 13:25:53.844265 systemd-journald[204]: Journal started Mar 2 13:25:53.844288 systemd-journald[204]: Runtime Journal (/run/log/journal/bf2bc809686046df9550f3a47fa5ed53) is 6M, max 48.1M, 42.1M free. Mar 2 13:25:53.763407 systemd-modules-load[205]: Inserted module 'overlay' Mar 2 13:25:53.882523 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:25:53.924595 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:25:53.983320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 2 13:25:54.054228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:25:54.094479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:25:54.159201 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 2 13:25:54.209688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:25:54.292076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:25:54.377521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:25:54.441690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:25:54.520443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:25:54.687850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:25:54.861449 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 13:25:54.901347 kernel: Bridge firewalling registered Mar 2 13:25:54.904233 systemd-modules-load[205]: Inserted module 'br_netfilter' Mar 2 13:25:54.915530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:25:54.985250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 2 13:25:55.082347 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 13:25:55.323340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:25:55.360661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:25:55.753306 systemd-resolved[276]: Positive Trust Anchors: Mar 2 13:25:55.754279 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:25:55.767150 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:25:55.810567 systemd-resolved[276]: Defaulting to hostname 'linux'. Mar 2 13:25:55.845921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:25:55.915345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:25:56.577378 kernel: SCSI subsystem initialized Mar 2 13:25:56.606207 kernel: Loading iSCSI transport class v2.0-870. 
Mar 2 13:25:56.689068 kernel: iscsi: registered transport (tcp) Mar 2 13:25:56.790695 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:25:56.790876 kernel: QLogic iSCSI HBA Driver Mar 2 13:25:57.095011 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:25:57.252656 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:25:57.321720 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:25:58.710408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:25:58.763543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:25:59.153030 kernel: raid6: avx2x4 gen() 5497 MB/s Mar 2 13:25:59.176889 kernel: raid6: avx2x2 gen() 9763 MB/s Mar 2 13:25:59.208133 kernel: raid6: avx2x1 gen() 5039 MB/s Mar 2 13:25:59.208218 kernel: raid6: using algorithm avx2x2 gen() 9763 MB/s Mar 2 13:25:59.253225 kernel: raid6: .... xor() 5346 MB/s, rmw enabled Mar 2 13:25:59.253305 kernel: raid6: using avx2x2 recovery algorithm Mar 2 13:25:59.357558 kernel: xor: automatically using best checksumming function avx Mar 2 13:26:01.160393 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:26:01.248063 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:26:01.284594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:26:01.477914 systemd-udevd[454]: Using default interface naming scheme 'v255'. Mar 2 13:26:01.514620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:26:01.586458 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:26:01.874723 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Mar 2 13:26:02.093618 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 13:26:02.154444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:26:02.527575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:26:02.553106 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:26:02.899018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:26:02.899240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:26:02.965682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:26:03.006066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:26:03.030340 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 13:26:03.148360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:26:03.148652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:26:03.226930 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 13:26:03.242317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:26:03.389116 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 13:26:03.389570 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 13:26:03.458525 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 13:26:03.485561 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 13:26:03.485638 kernel: GPT:9289727 != 19775487 Mar 2 13:26:03.485654 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 13:26:03.492532 kernel: GPT:9289727 != 19775487 Mar 2 13:26:03.501411 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 2 13:26:03.501493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:26:03.560330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:26:03.730088 kernel: libata version 3.00 loaded. Mar 2 13:26:03.894466 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 13:26:03.894539 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 13:26:03.911847 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 13:26:03.963805 kernel: AES CTR mode by8 optimization enabled Mar 2 13:26:04.005894 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 13:26:04.091106 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 13:26:04.091423 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 13:26:04.091647 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 13:26:04.192657 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 13:26:04.323077 kernel: scsi host0: ahci Mar 2 13:26:04.323451 kernel: scsi host1: ahci Mar 2 13:26:04.325223 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 13:26:04.391676 kernel: scsi host2: ahci Mar 2 13:26:04.460633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 2 13:26:04.817509 kernel: scsi host3: ahci Mar 2 13:26:04.818275 kernel: scsi host4: ahci Mar 2 13:26:04.818643 kernel: scsi host5: ahci Mar 2 13:26:04.822365 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1 Mar 2 13:26:04.822392 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1 Mar 2 13:26:04.822422 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1 Mar 2 13:26:04.822437 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1 Mar 2 13:26:04.822451 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1 Mar 2 13:26:04.822470 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1 Mar 2 13:26:04.562626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:26:04.584438 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:26:04.922842 disk-uuid[611]: Primary Header is updated. Mar 2 13:26:04.922842 disk-uuid[611]: Secondary Entries is updated. Mar 2 13:26:04.922842 disk-uuid[611]: Secondary Header is updated. 
Mar 2 13:26:04.990593 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:26:05.018129 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 13:26:05.018210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:26:05.018231 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 13:26:05.066900 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 13:26:05.066961 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 13:26:05.108233 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 13:26:05.108309 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 13:26:05.138723 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 13:26:05.139297 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 13:26:05.139323 kernel: ata3.00: applying bridge limits Mar 2 13:26:05.152531 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 13:26:05.173189 kernel: ata3.00: configured for UDMA/100 Mar 2 13:26:05.203404 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 13:26:05.774189 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 13:26:05.779587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:26:05.816357 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 13:26:06.116679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:26:06.153624 disk-uuid[612]: The operation has completed successfully. Mar 2 13:26:07.206918 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:26:07.207149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:26:07.280660 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:26:07.303227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:26:07.444331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 2 13:26:07.444401 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:26:07.462414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:26:07.647394 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:26:07.974158 sh[642]: Success Mar 2 13:26:08.025493 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:26:08.304548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 13:26:08.306479 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:26:08.366685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 13:26:08.734438 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 2 13:26:09.284387 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:26:09.329688 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:26:09.398494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 13:26:09.475974 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661) Mar 2 13:26:09.497962 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 13:26:09.510855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:26:09.615135 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 13:26:09.615513 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 13:26:09.629426 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:26:09.693197 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 2 13:26:09.745602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:26:09.798167 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:26:09.889719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:26:10.225196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690) Mar 2 13:26:10.272494 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:26:10.305516 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:26:10.373487 kernel: BTRFS info (device vda6): turning on async discard Mar 2 13:26:10.373565 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 13:26:10.440249 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:26:10.483142 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:26:10.495090 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:26:15.227446 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:26:15.649194 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 13:26:16.202921 ignition[747]: Ignition 2.22.0 Mar 2 13:26:16.203079 ignition[747]: Stage: fetch-offline Mar 2 13:26:16.203309 ignition[747]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:26:16.203324 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:26:16.249935 systemd-networkd[836]: lo: Link UP Mar 2 13:26:16.203616 ignition[747]: parsed url from cmdline: "" Mar 2 13:26:16.249944 systemd-networkd[836]: lo: Gained carrier Mar 2 13:26:16.203625 ignition[747]: no config URL provided Mar 2 13:26:16.266431 systemd-networkd[836]: Enumeration completed Mar 2 13:26:16.203700 ignition[747]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:26:16.266985 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:26:16.203718 ignition[747]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:26:16.273325 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:26:16.203958 ignition[747]: op(1): [started] loading QEMU firmware config module Mar 2 13:26:16.273332 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:26:16.203967 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 13:26:16.283562 systemd-networkd[836]: eth0: Link UP Mar 2 13:26:16.487398 ignition[747]: op(1): [finished] loading QEMU firmware config module Mar 2 13:26:16.293901 systemd-networkd[836]: eth0: Gained carrier Mar 2 13:26:16.293927 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:26:16.347314 systemd[1]: Reached target network.target - Network. 
Mar 2 13:26:16.591277 systemd-networkd[836]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:26:18.362700 systemd-networkd[836]: eth0: Gained IPv6LL Mar 2 13:26:19.963703 ignition[747]: parsing config with SHA512: e0168d7f72d5f6fd6a976d2511501be507bc81e3df7b7bf6321feffd1c66506256e4cc14f3bf062774ae8e0316e658f235d0183159e95651d8f4b605897be42e Mar 2 13:26:22.218657 unknown[747]: fetched base config from "system" Mar 2 13:26:22.222920 ignition[747]: fetch-offline: fetch-offline passed Mar 2 13:26:22.221370 unknown[747]: fetched user config from "qemu" Mar 2 13:26:22.245875 ignition[747]: Ignition finished successfully Mar 2 13:26:22.394602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:26:22.567206 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 13:26:22.683540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:26:25.473526 ignition[847]: Ignition 2.22.0 Mar 2 13:26:25.473616 ignition[847]: Stage: kargs Mar 2 13:26:25.476454 ignition[847]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:26:25.476515 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:26:25.483993 ignition[847]: kargs: kargs passed Mar 2 13:26:25.484196 ignition[847]: Ignition finished successfully Mar 2 13:26:25.654351 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:26:25.763564 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 13:26:27.076008 ignition[855]: Ignition 2.22.0 Mar 2 13:26:27.087984 ignition[855]: Stage: disks Mar 2 13:26:27.093471 ignition[855]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:26:27.093489 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:26:27.119964 ignition[855]: disks: disks passed Mar 2 13:26:27.124988 ignition[855]: Ignition finished successfully Mar 2 13:26:27.283243 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:26:27.352556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:26:27.379710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:26:27.497438 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:26:27.548687 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:26:27.676555 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:26:27.810422 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 13:26:28.312894 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 2 13:26:28.387376 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:26:28.490363 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:26:30.180579 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none. Mar 2 13:26:30.187717 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:26:30.222266 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:26:30.252013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:26:30.409016 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:26:30.440719 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 2 13:26:30.440942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:26:30.440990 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:26:30.596416 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:26:30.695390 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 13:26:30.867171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (874) Mar 2 13:26:30.867220 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:26:30.867236 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:26:30.983965 kernel: BTRFS info (device vda6): turning on async discard Mar 2 13:26:30.984052 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 13:26:31.035699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:26:31.372980 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:26:31.470520 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:26:31.583971 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:26:31.745219 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:26:36.297523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:26:36.384476 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:26:36.489190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:26:36.783418 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 2 13:26:36.866693 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:26:37.604170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 2 13:26:38.427406 ignition[987]: INFO : Ignition 2.22.0 Mar 2 13:26:38.427406 ignition[987]: INFO : Stage: mount Mar 2 13:26:38.504292 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:26:38.504292 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:26:38.504292 ignition[987]: INFO : mount: mount passed Mar 2 13:26:38.504292 ignition[987]: INFO : Ignition finished successfully Mar 2 13:26:38.589256 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:26:38.678551 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:26:39.196600 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:26:39.776581 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1001) Mar 2 13:26:39.858305 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 13:26:39.858386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:26:40.043384 kernel: BTRFS info (device vda6): turning on async discard Mar 2 13:26:40.043467 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 13:26:40.072004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:26:41.172054 ignition[1019]: INFO : Ignition 2.22.0 Mar 2 13:26:41.211987 ignition[1019]: INFO : Stage: files Mar 2 13:26:41.271700 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:26:41.344068 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:26:41.344068 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:26:41.344068 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:26:41.344068 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:26:41.600903 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:26:41.634516 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:26:41.697955 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:26:41.689599 unknown[1019]: wrote ssh authorized keys file for user: core Mar 2 13:26:41.764640 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:26:41.764640 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 13:26:42.023048 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 13:26:46.299908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 13:26:46.299908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 13:26:46.430655 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 2 13:26:46.832845 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 2 13:26:51.916478 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1154151785 wd_nsec: 1154151883 Mar 2 13:26:54.478986 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 13:26:54.520908 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 2 13:26:55.438360 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 2 13:26:58.238948 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 2 13:26:58.238948 ignition[1019]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 2 13:26:58.451229 ignition[1019]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:26:58.742997 ignition[1019]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:26:58.742997 ignition[1019]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 2 13:26:58.742997 ignition[1019]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 2 13:26:58.855021 ignition[1019]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 
13:26:58.855021 ignition[1019]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:26:58.855021 ignition[1019]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 2 13:26:58.855021 ignition[1019]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 13:26:59.435941 ignition[1019]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:26:59.546467 ignition[1019]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:26:59.546467 ignition[1019]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 13:26:59.546467 ignition[1019]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:26:59.546467 ignition[1019]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 13:27:00.049492 ignition[1019]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:27:00.049492 ignition[1019]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:27:00.049492 ignition[1019]: INFO : files: files passed Mar 2 13:27:00.049492 ignition[1019]: INFO : Ignition finished successfully Mar 2 13:26:59.607645 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:26:59.652521 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:26:59.896287 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 2 13:27:00.684597 initrd-setup-root-after-ignition[1046]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 13:27:00.777152 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:27:00.836099 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:27:00.836099 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:27:01.022418 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:27:01.035335 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:27:01.202273 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:27:01.244529 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:27:01.328439 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:27:02.314537 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:27:02.351350 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 13:27:02.472433 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:27:02.555646 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:27:02.769571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:27:02.799497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:27:03.403008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:27:03.603056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:27:04.019324 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Mar 2 13:27:04.087574 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:27:04.310637 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:27:04.334626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:27:04.348350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:27:04.517000 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:27:04.627392 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:27:04.654036 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:27:04.654310 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:27:04.654452 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:27:04.654575 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 2 13:27:04.654709 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:27:04.655003 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:27:04.655145 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:27:04.656665 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:27:04.656968 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:27:04.657068 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:27:04.658712 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:27:04.664548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:27:04.911646 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:27:05.368980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:27:05.579925 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 2 13:27:05.712118 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:27:05.718146 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:27:05.916280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:27:05.951395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:27:06.194611 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:27:06.222075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:27:06.227494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:27:06.350632 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:27:06.496055 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:27:06.597356 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:27:06.601076 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:27:06.723874 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:27:06.724102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:27:06.778638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:27:06.779050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:27:07.062912 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:27:07.073346 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:27:07.194460 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 13:27:07.358134 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:27:07.450064 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:27:07.450559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:27:07.513133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 2 13:27:07.513428 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:27:07.724375 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:27:07.724550 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 13:27:07.986528 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 13:27:08.188458 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:27:08.188707 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:27:08.471135 ignition[1075]: INFO : Ignition 2.22.0 Mar 2 13:27:08.500497 ignition[1075]: INFO : Stage: umount Mar 2 13:27:08.500497 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:27:08.500497 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:27:08.953612 ignition[1075]: INFO : umount: umount passed Mar 2 13:27:08.991088 ignition[1075]: INFO : Ignition finished successfully Mar 2 13:27:09.010439 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 13:27:09.028056 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:27:09.101606 systemd[1]: Stopped target network.target - Network. Mar 2 13:27:09.269343 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:27:09.274280 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:27:09.418600 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:27:09.419015 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:27:09.561292 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:27:09.561495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:27:09.676346 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:27:09.676580 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:27:09.988999 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 2 13:27:10.000120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:27:10.197869 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:27:10.272397 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:27:10.463662 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:27:10.471181 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:27:10.665606 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 2 13:27:10.670896 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:27:10.671157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:27:10.989904 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 2 13:27:11.056637 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 2 13:27:11.169320 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:27:11.169574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:27:11.252723 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:27:11.337186 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:27:11.338075 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:27:11.512365 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:27:11.512516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:27:11.756662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:27:11.757026 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:27:11.785123 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Mar 2 13:27:11.785356 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:27:11.920978 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:27:12.183969 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 2 13:27:12.184160 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 2 13:27:12.188175 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:27:12.191615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:27:12.459051 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:27:12.459983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:27:12.619504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:27:12.619709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:27:12.694615 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:27:12.694878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:27:12.870942 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:27:12.871050 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:27:13.111538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:27:13.111968 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:27:13.330015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:27:13.429119 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 2 13:27:13.429417 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:27:13.868041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Mar 2 13:27:13.951445 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:27:14.118705 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 13:27:14.119085 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:27:14.459008 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:27:14.459387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:27:14.474455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:27:14.474656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:27:14.877982 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 2 13:27:14.888681 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Mar 2 13:27:14.888952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 2 13:27:14.889122 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 13:27:14.894471 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 13:27:14.894651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:27:14.962139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:27:14.963192 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:27:15.211513 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:27:15.573382 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:27:16.287717 systemd[1]: Switching root. Mar 2 13:27:16.890963 systemd-journald[204]: Received SIGTERM from PID 1 (systemd). 
Mar 2 13:27:16.899514 systemd-journald[204]: Journal stopped Mar 2 13:27:37.497524 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 13:27:37.498165 kernel: SELinux: policy capability open_perms=1 Mar 2 13:27:37.498195 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 13:27:37.498216 kernel: SELinux: policy capability always_check_network=0 Mar 2 13:27:37.498235 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 13:27:37.498268 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 13:27:37.498365 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 13:27:37.498397 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 13:27:37.498418 kernel: SELinux: policy capability userspace_initial_context=0 Mar 2 13:27:37.498446 kernel: audit: type=1403 audit(1772458038.306:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 13:27:37.498469 systemd[1]: Successfully loaded SELinux policy in 643.039ms. Mar 2 13:27:37.498505 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 161.609ms. Mar 2 13:27:37.498534 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 13:27:37.498556 systemd[1]: Detected virtualization kvm. Mar 2 13:27:37.498575 systemd[1]: Detected architecture x86-64. Mar 2 13:27:37.498596 systemd[1]: Detected first boot. Mar 2 13:27:37.498624 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:27:37.498643 zram_generator::config[1120]: No configuration found. 
Mar 2 13:27:37.498670 kernel: Guest personality initialized and is inactive Mar 2 13:27:37.498691 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 2 13:27:37.498713 kernel: Initialized host personality Mar 2 13:27:37.498872 kernel: NET: Registered PF_VSOCK protocol family Mar 2 13:27:37.498899 systemd[1]: Populated /etc with preset unit settings. Mar 2 13:27:37.498921 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 2 13:27:37.498941 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 13:27:37.498961 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 13:27:37.498988 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 13:27:37.499009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 13:27:37.499028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 13:27:37.499049 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 13:27:37.499069 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 13:27:37.499088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 13:27:37.499108 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 13:27:37.499129 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 13:27:37.499148 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 13:27:37.499174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:27:37.499195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:27:37.499213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 2 13:27:37.499233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 13:27:37.499254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 13:27:37.499573 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:27:37.499606 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 13:27:37.499630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:27:37.499646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:27:37.499662 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 13:27:37.499680 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 13:27:37.499698 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 13:27:37.499717 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 13:27:37.499868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:27:37.499889 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:27:37.499906 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:27:37.499929 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:27:37.499949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 13:27:37.499966 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 13:27:37.499984 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 2 13:27:37.500002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:27:37.500022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:27:37.500041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 2 13:27:37.500062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 13:27:37.500082 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 13:27:37.500104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 13:27:37.500129 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 13:27:37.500153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:27:37.500172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 13:27:37.500192 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 13:27:37.500212 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 13:27:37.500234 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 13:27:37.500254 systemd[1]: Reached target machines.target - Containers. Mar 2 13:27:37.500363 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 13:27:37.500400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:27:37.500419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:27:37.500437 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 13:27:37.500455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:27:37.500475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:27:37.500494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:27:37.500514 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 2 13:27:37.500533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:27:37.500559 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 13:27:37.501191 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 13:27:37.501220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 13:27:37.501242 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 13:27:37.501263 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 13:27:37.501371 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 13:27:37.501400 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:27:37.501422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:27:37.501440 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:27:37.501467 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 13:27:37.501571 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 2 13:27:37.501602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:27:37.501624 kernel: ACPI: bus type drm_connector registered Mar 2 13:27:37.501644 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 13:27:37.501663 systemd[1]: Stopped verity-setup.service. Mar 2 13:27:37.501894 systemd-journald[1206]: Collecting audit messages is disabled. 
Mar 2 13:27:37.502015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:27:37.502050 systemd-journald[1206]: Journal started
Mar 2 13:27:37.502084 systemd-journald[1206]: Runtime Journal (/run/log/journal/bf2bc809686046df9550f3a47fa5ed53) is 6M, max 48.1M, 42.1M free.
Mar 2 13:27:31.386365 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:27:31.460530 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:27:31.467163 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:27:31.468346 systemd[1]: systemd-journald.service: Consumed 3.859s CPU time.
Mar 2 13:27:37.564003 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:27:37.583405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:27:37.601658 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:27:37.622873 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:27:37.644364 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:27:37.669707 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:27:37.687088 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:27:37.715004 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:27:37.757140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:27:37.796508 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:27:37.797161 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:27:37.856016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:27:37.856991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:27:37.890134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:27:37.892948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:27:37.935272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:27:37.979509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:27:38.018274 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:27:38.048358 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 2 13:27:38.066550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:27:38.102968 kernel: fuse: init (API version 7.41)
Mar 2 13:27:38.135424 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:27:38.139014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:27:38.176121 kernel: loop: module loaded
Mar 2 13:27:38.163644 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:27:38.171057 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:27:38.216535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:27:38.218115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:27:38.261108 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:27:38.287441 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:27:38.313643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:27:38.350905 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:27:38.350981 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:27:38.385074 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 2 13:27:38.423142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:27:38.455177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:27:38.471587 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:27:38.519048 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:27:38.615950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:27:38.626954 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:27:38.660253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:27:38.670020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:27:38.721019 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:27:38.746911 systemd-journald[1206]: Time spent on flushing to /var/log/journal/bf2bc809686046df9550f3a47fa5ed53 is 52.607ms for 1074 entries.
Mar 2 13:27:38.746911 systemd-journald[1206]: System Journal (/var/log/journal/bf2bc809686046df9550f3a47fa5ed53) is 8M, max 195.6M, 187.6M free.
Mar 2 13:27:38.898166 systemd-journald[1206]: Received client request to flush runtime journal.
Mar 2 13:27:38.766472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:27:38.819716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:27:38.842516 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:27:38.914020 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:27:39.001706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:27:39.045632 kernel: loop0: detected capacity change from 0 to 110984
Mar 2 13:27:39.069672 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:27:39.083220 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Mar 2 13:27:39.083247 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Mar 2 13:27:39.110652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:27:39.163684 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 2 13:27:39.193041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:27:39.257127 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:27:39.331548 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:27:39.570692 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:27:39.589254 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 2 13:27:39.659888 kernel: loop1: detected capacity change from 0 to 228704
Mar 2 13:27:39.683235 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:27:39.740670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:27:40.693829 kernel: loop2: detected capacity change from 0 to 128560
Mar 2 13:27:40.962246 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 13:27:40.962275 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 13:27:41.006668 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:27:41.871459 kernel: loop3: detected capacity change from 0 to 110984
Mar 2 13:27:42.569586 kernel: loop4: detected capacity change from 0 to 228704
Mar 2 13:27:42.737110 kernel: loop5: detected capacity change from 0 to 128560
Mar 2 13:27:43.205966 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:27:43.243557 (sd-merge)[1267]: Merged extensions into '/usr'.
Mar 2 13:27:43.966212 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:27:43.966403 systemd[1]: Reloading...
Mar 2 13:27:45.647902 zram_generator::config[1293]: No configuration found.
Mar 2 13:27:46.719114 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:27:47.371664 systemd[1]: Reloading finished in 3394 ms.
Mar 2 13:27:47.487997 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:27:47.588553 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:27:47.655898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:27:47.770175 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:27:47.818902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:27:47.889870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:27:47.990300 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:27:47.990420 systemd[1]: Reloading...
Mar 2 13:27:48.053007 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 2 13:27:48.053162 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 2 13:27:48.054248 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:27:48.055088 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:27:48.060005 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:27:48.060687 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Mar 2 13:27:48.061001 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Mar 2 13:27:48.107459 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:27:48.107479 systemd-tmpfiles[1332]: Skipping /boot
Mar 2 13:27:48.244710 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:27:48.249870 systemd-tmpfiles[1332]: Skipping /boot
Mar 2 13:27:48.253716 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Mar 2 13:27:50.203618 zram_generator::config[1365]: No configuration found.
Mar 2 13:27:52.352198 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 2 13:27:52.534248 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:27:52.566173 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:27:52.632670 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:27:52.646998 systemd[1]: Reloading finished in 4655 ms.
Mar 2 13:27:52.709603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:27:52.912007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:27:53.473900 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 2 13:27:53.547072 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:27:53.578134 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:27:53.732032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:27:54.112091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:27:54.130494 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 13:27:54.188955 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:27:54.231586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:27:54.250238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:27:54.496296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:27:55.867701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:27:55.923626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:27:55.964510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:27:55.967241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:27:56.008083 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 13:27:56.038180 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:27:56.963530 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:27:57.524723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:27:57.658569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:27:57.740713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:27:57.779110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:27:57.825611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:27:57.827000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:27:58.171716 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:27:58.364098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:27:58.504511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:27:58.505096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:27:58.609543 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:27:58.611714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:27:59.056039 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:27:59.109682 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:27:59.564055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:27:59.679956 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:28:00.070665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:28:00.071127 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:28:00.170121 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:28:00.259632 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:28:00.314296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:28:00.959027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:28:01.096210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:28:02.287073 augenrules[1491]: No rules
Mar 2 13:28:02.330575 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:28:02.357343 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 13:28:02.388287 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:28:04.110571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:28:04.699993 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:28:07.523998 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:28:07.604078 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:28:07.622153 systemd-resolved[1455]: Positive Trust Anchors:
Mar 2 13:28:07.625189 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:28:07.625235 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:28:07.675571 systemd-networkd[1454]: lo: Link UP
Mar 2 13:28:07.677705 systemd-networkd[1454]: lo: Gained carrier
Mar 2 13:28:07.730248 systemd-networkd[1454]: Enumeration completed
Mar 2 13:28:07.732343 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:28:07.785556 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:28:07.791148 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:28:07.908294 systemd-networkd[1454]: eth0: Link UP
Mar 2 13:28:07.909944 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 2 13:28:07.975529 systemd-networkd[1454]: eth0: Gained carrier
Mar 2 13:28:07.975973 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:28:08.022127 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:28:08.201925 systemd-resolved[1455]: Defaulting to hostname 'linux'.
Mar 2 13:28:08.217166 systemd-networkd[1454]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:28:08.229467 systemd-timesyncd[1482]: Network configuration changed, trying to establish connection.
Mar 2 13:28:08.238199 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:28:08.899300 systemd-timesyncd[1482]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:28:08.908123 systemd-resolved[1455]: Clock change detected. Flushing caches.
Mar 2 13:28:08.910708 systemd-timesyncd[1482]: Initial clock synchronization to Mon 2026-03-02 13:28:08.879987 UTC.
Mar 2 13:28:08.925526 systemd[1]: Reached target network.target - Network.
Mar 2 13:28:09.069393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:28:09.160104 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:28:09.257925 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:28:09.356165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:28:09.430943 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 2 13:28:09.456855 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:28:09.497732 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:28:09.532844 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:28:09.565785 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:28:09.565922 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:28:09.601007 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:28:09.667817 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:28:09.770322 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:28:09.977819 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 2 13:28:10.028411 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 2 13:28:10.172506 systemd-networkd[1454]: eth0: Gained IPv6LL
Mar 2 13:28:10.201494 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 2 13:28:10.448907 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:28:10.524472 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 2 13:28:10.614533 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 2 13:28:10.825333 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:28:10.912787 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:28:10.997474 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:28:11.048378 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:28:11.123092 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:28:11.175065 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:28:11.175126 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:28:11.273416 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:28:11.460007 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:28:11.579397 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:28:11.719084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:28:11.821800 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:28:11.887529 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:28:11.924025 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:28:11.948307 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 2 13:28:11.968737 jq[1521]: false
Mar 2 13:28:11.991756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:28:12.032948 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:28:12.355941 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:28:12.618985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:28:12.919052 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing passwd entry cache
Mar 2 13:28:12.917701 oslogin_cache_refresh[1523]: Refreshing passwd entry cache
Mar 2 13:28:12.958059 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:28:13.102431 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting users, quitting
Mar 2 13:28:13.073502 oslogin_cache_refresh[1523]: Failure getting users, quitting
Mar 2 13:28:13.223806 extend-filesystems[1522]: Found /dev/vda6
Mar 2 13:28:13.221907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:28:13.234972 oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 13:28:13.558054 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 13:28:13.558054 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing group entry cache
Mar 2 13:28:13.235528 oslogin_cache_refresh[1523]: Refreshing group entry cache
Mar 2 13:28:13.753444 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting groups, quitting
Mar 2 13:28:13.761869 oslogin_cache_refresh[1523]: Failure getting groups, quitting
Mar 2 13:28:13.762065 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 13:28:13.763518 oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 13:28:13.922907 extend-filesystems[1522]: Found /dev/vda9
Mar 2 13:28:14.039744 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:28:14.104963 extend-filesystems[1522]: Checking size of /dev/vda9
Mar 2 13:28:14.157076 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:28:14.169079 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:28:14.201353 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:28:15.401923 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:28:15.848445 jq[1548]: true
Mar 2 13:28:16.581367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:28:16.671214 extend-filesystems[1522]: Resized partition /dev/vda9
Mar 2 13:28:16.651102 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:28:16.853759 update_engine[1545]: I20260302 13:28:16.831017 1545 main.cc:92] Flatcar Update Engine starting
Mar 2 13:28:16.657994 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:28:16.659121 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 2 13:28:16.659825 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 2 13:28:16.808533 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:28:16.809214 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:28:16.917805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:28:17.186014 extend-filesystems[1557]: resize2fs 1.47.3 (8-Jul-2025)
Mar 2 13:28:17.422020 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:28:17.426883 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:28:17.427860 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:28:18.857065 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:28:19.405024 jq[1563]: true
Mar 2 13:28:19.511124 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 13:28:19.512849 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 13:28:19.601443 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 2 13:28:19.636180 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:28:20.459196 tar[1562]: linux-amd64/LICENSE
Mar 2 13:28:20.459196 tar[1562]: linux-amd64/helm
Mar 2 13:28:20.500493 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:28:20.673924 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:28:20.673924 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:28:20.673924 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:28:21.248199 extend-filesystems[1522]: Resized filesystem in /dev/vda9
Mar 2 13:28:21.697532 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:28:21.698940 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:28:21.874177 dbus-daemon[1519]: [system] SELinux support is enabled
Mar 2 13:28:21.903098 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:28:21.926697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:28:21.926756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:28:21.948988 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:28:21.949035 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:28:22.740033 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:28:22.741867 update_engine[1545]: I20260302 13:28:22.741021 1545 update_check_scheduler.cc:74] Next update check in 4m16s
Mar 2 13:28:22.811226 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 2 13:28:23.248753 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:28:23.264481 systemd-logind[1541]: New seat seat0.
Mar 2 13:28:23.315107 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:28:23.365819 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:28:26.249884 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:28:26.329403 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:28:26.320190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:28:26.472532 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:28:30.272999 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:28:30.534168 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:28:30.667772 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:52380.service - OpenSSH per-connection server daemon (10.0.0.1:52380).
Mar 2 13:28:32.943054 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:28:32.951270 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:28:33.086911 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:28:34.656123 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:28:34.706186 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:28:34.747944 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:28:34.810080 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:28:34.878419 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:28:38.147541 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 52380 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:28:38.200141 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:28:39.679148 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 2 13:28:39.825958 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 2 13:28:40.460939 kernel: kvm_amd: TSC scaling supported
Mar 2 13:28:40.462812 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:28:40.463075 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:28:40.679950 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:28:40.755932 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:28:41.184125 systemd-logind[1541]: New session 1 of user core.
Mar 2 13:28:42.339506 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 2 13:28:42.364196 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 2 13:28:42.638021 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 2 13:28:42.690855 systemd-logind[1541]: New session c1 of user core.
Mar 2 13:28:42.908722 containerd[1564]: time="2026-03-02T13:28:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 2 13:28:42.937423 containerd[1564]: time="2026-03-02T13:28:42.926089018Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.147387413Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="85.565µs"
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.148147146Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.148185771Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.148969930Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.148998349Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.149119057Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.149303176Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.149323860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.150144662Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.150169407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150169 containerd[1564]: time="2026-03-02T13:28:43.150189401Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150727 containerd[1564]: time="2026-03-02T13:28:43.150202109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 2 13:28:43.150727 containerd[1564]: time="2026-03-02T13:28:43.150439192Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.152019097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.152065978Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.152080147Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.153268527Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.154455157Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 2 13:28:43.155404 containerd[1564]: time="2026-03-02T13:28:43.154815631Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:28:43.251178 containerd[1564]: time="2026-03-02T13:28:43.248482918Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 2 13:28:43.251178 containerd[1564]: time="2026-03-02T13:28:43.250337361Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.253940948Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.253973649Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.253992241Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254006740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254097970Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254191010Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254209332Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254222370Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254235128Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254251369Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254864056Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.254981663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.255006378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 2 13:28:43.255454 containerd[1564]: time="2026-03-02T13:28:43.255023679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255037639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255050757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255066058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255081157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255095006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255127888Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255149111Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255534221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.255883047Z" level=info msg="Start snapshots syncer"
Mar 2 13:28:43.256256 containerd[1564]: time="2026-03-02T13:28:43.256084447Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 2 13:28:43.265925 containerd[1564]: time="2026-03-02T13:28:43.262929227Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 2 13:28:43.265925 containerd[1564]: time="2026-03-02T13:28:43.263434604Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 2 13:28:43.271429 containerd[1564]: time="2026-03-02T13:28:43.271372440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 2 13:28:43.274171 containerd[1564]: time="2026-03-02T13:28:43.272206421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 2 13:28:43.274349 containerd[1564]: time="2026-03-02T13:28:43.274316560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 2 13:28:43.278105 containerd[1564]: time="2026-03-02T13:28:43.278062599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 2 13:28:43.280053 containerd[1564]: time="2026-03-02T13:28:43.280021960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 2 13:28:43.280165 containerd[1564]: time="2026-03-02T13:28:43.280140576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 2 13:28:43.280251 containerd[1564]: time="2026-03-02T13:28:43.280231756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 2 13:28:43.280331 containerd[1564]: time="2026-03-02T13:28:43.280311938Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 2 13:28:43.289837 containerd[1564]: time="2026-03-02T13:28:43.289448674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 2 13:28:43.290425 containerd[1564]: time="2026-03-02T13:28:43.290397779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 2 13:28:43.290928 containerd[1564]: time="2026-03-02T13:28:43.290853035Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302120453Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302478053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302499707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302517138Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302531078Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302545437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.302940353Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.303162857Z" level=info msg="runtime interface created"
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.303179158Z" level=info msg="created NRI interface"
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.303199491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.303225597Z" level=info msg="Connect containerd service"
Mar 2 13:28:43.307098 containerd[1564]: time="2026-03-02T13:28:43.303368981Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:28:43.329911 containerd[1564]: time="2026-03-02T13:28:43.329541587Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:28:45.167748 systemd[1638]: Queued start job for default target default.target.
Mar 2 13:28:45.854876 systemd[1638]: Created slice app.slice - User Application Slice.
Mar 2 13:28:45.855017 systemd[1638]: Reached target paths.target - Paths.
Mar 2 13:28:45.855103 systemd[1638]: Reached target timers.target - Timers.
Mar 2 13:28:45.873151 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 2 13:28:46.810503 tar[1562]: linux-amd64/README.md
Mar 2 13:28:46.803347 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 2 13:28:46.808928 systemd[1638]: Reached target sockets.target - Sockets.
Mar 2 13:28:46.809098 systemd[1638]: Reached target basic.target - Basic System.
Mar 2 13:28:46.809181 systemd[1638]: Reached target default.target - Main User Target.
Mar 2 13:28:46.809403 systemd[1638]: Startup finished in 4.061s.
Mar 2 13:28:46.825423 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 2 13:28:46.934370 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 2 13:28:48.035248 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:28:48.520872 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:41178.service - OpenSSH per-connection server daemon (10.0.0.1:41178).
Mar 2 13:28:49.739484 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 41178 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:28:49.765084 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:28:52.740142 systemd-logind[1541]: New session 2 of user core.
Mar 2 13:28:53.275251 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 2 13:28:53.294761 containerd[1564]: time="2026-03-02T13:28:53.289294471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:28:53.294761 containerd[1564]: time="2026-03-02T13:28:53.289911919Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:28:53.294761 containerd[1564]: time="2026-03-02T13:28:53.290368390Z" level=info msg="Start subscribing containerd event"
Mar 2 13:28:53.316214 containerd[1564]: time="2026-03-02T13:28:53.294346436Z" level=info msg="Start recovering state"
Mar 2 13:28:53.329838 containerd[1564]: time="2026-03-02T13:28:53.317438209Z" level=info msg="Start event monitor"
Mar 2 13:28:53.336488 containerd[1564]: time="2026-03-02T13:28:53.336437567Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:28:53.336839 containerd[1564]: time="2026-03-02T13:28:53.336817123Z" level=info msg="Start streaming server"
Mar 2 13:28:53.338430 containerd[1564]: time="2026-03-02T13:28:53.338397273Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 2 13:28:53.338839 containerd[1564]: time="2026-03-02T13:28:53.338810501Z" level=info msg="runtime interface starting up..."
Mar 2 13:28:53.339296 containerd[1564]: time="2026-03-02T13:28:53.339001926Z" level=info msg="starting plugins..."
Mar 2 13:28:53.342947 containerd[1564]: time="2026-03-02T13:28:53.342918367Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 2 13:28:53.343874 containerd[1564]: time="2026-03-02T13:28:53.343849661Z" level=info msg="containerd successfully booted in 10.448800s"
Mar 2 13:28:53.346225 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:28:53.604962 systemd-udevd[1334]: cpu0: Worker [1385] processing SEQNUM=1716 is taking a long time
Mar 2 13:28:53.604987 systemd-udevd[1334]: cpu3: Worker [1382] processing SEQNUM=1719 is taking a long time
Mar 2 13:28:53.764859 systemd-udevd[1334]: cpu2: Worker [1396] processing SEQNUM=1718 is taking a long time
Mar 2 13:28:53.764887 systemd-udevd[1334]: cpu1: Worker [1381] processing SEQNUM=1717 is taking a long time
Mar 2 13:28:54.129025 sshd[1674]: Connection closed by 10.0.0.1 port 41178
Mar 2 13:28:54.138832 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Mar 2 13:28:54.940387 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:41178.service: Deactivated successfully.
Mar 2 13:28:54.995938 systemd[1]: session-2.scope: Deactivated successfully.
Mar 2 13:28:55.015444 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit.
Mar 2 13:28:55.059340 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:56152.service - OpenSSH per-connection server daemon (10.0.0.1:56152).
Mar 2 13:28:55.110145 systemd-logind[1541]: Removed session 2.
Mar 2 13:28:56.694474 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 56152 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:28:56.727230 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:28:57.369182 systemd-logind[1541]: New session 3 of user core.
Mar 2 13:28:57.427110 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 2 13:28:58.048830 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:28:58.045440 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Mar 2 13:28:58.058890 sshd[1683]: Connection closed by 10.0.0.1 port 56152
Mar 2 13:28:58.665273 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:56152.service: Deactivated successfully.
Mar 2 13:28:58.707349 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit.
Mar 2 13:28:58.714379 systemd[1]: session-3.scope: Deactivated successfully.
Mar 2 13:28:58.738227 systemd-logind[1541]: Removed session 3.
Mar 2 13:29:05.097004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:29:05.098154 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 2 13:29:05.100328 systemd[1]: Startup finished in 37.233s (kernel) + 1min 28.533s (initrd) + 1min 46.821s (userspace) = 3min 52.588s.
Mar 2 13:29:05.245902 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:29:07.526235 update_engine[1545]: I20260302 13:29:07.524472 1545 update_attempter.cc:509] Updating boot flags...
Mar 2 13:29:08.302163 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:51386.service - OpenSSH per-connection server daemon (10.0.0.1:51386).
Mar 2 13:29:08.792743 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 51386 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:08.796190 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:09.059393 systemd-logind[1541]: New session 4 of user core.
Mar 2 13:29:09.183440 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:29:13.133325 sshd[1722]: Connection closed by 10.0.0.1 port 51386
Mar 2 13:29:13.198825 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Mar 2 13:29:19.566425 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:51386.service: Deactivated successfully.
Mar 2 13:29:19.606869 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:29:19.631332 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:29:19.668234 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378).
Mar 2 13:29:19.728403 systemd-logind[1541]: Removed session 4.
Mar 2 13:29:20.793905 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:20.807441 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:21.212331 systemd-logind[1541]: New session 5 of user core.
Mar 2 13:29:21.234391 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:29:21.242182 kubelet[1694]: E0302 13:29:21.240696 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:29:21.254974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:29:21.255422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:29:21.260119 systemd[1]: kubelet.service: Consumed 12.128s CPU time, 271.4M memory peak.
Mar 2 13:29:21.416500 sshd[1732]: Connection closed by 10.0.0.1 port 52378
Mar 2 13:29:21.427742 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Mar 2 13:29:21.493764 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:52378.service: Deactivated successfully.
Mar 2 13:29:21.501038 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:29:21.519199 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:29:21.535404 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:49408.service - OpenSSH per-connection server daemon (10.0.0.1:49408).
Mar 2 13:29:21.555006 systemd-logind[1541]: Removed session 5.
Mar 2 13:29:21.945846 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 49408 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:21.959275 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:22.009032 systemd-logind[1541]: New session 6 of user core.
Mar 2 13:29:22.020014 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:29:22.923833 sshd[1741]: Connection closed by 10.0.0.1 port 49408
Mar 2 13:29:22.928404 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Mar 2 13:29:22.956330 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:49418.service - OpenSSH per-connection server daemon (10.0.0.1:49418).
Mar 2 13:29:22.980862 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:49408.service: Deactivated successfully.
Mar 2 13:29:23.001832 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:29:23.009876 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:29:23.019900 systemd-logind[1541]: Removed session 6.
Mar 2 13:29:23.495952 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 49418 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:23.502442 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:23.595677 systemd-logind[1541]: New session 7 of user core.
Mar 2 13:29:23.735229 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 2 13:29:23.999327 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 2 13:29:24.001922 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:29:24.100791 sudo[1751]: pam_unix(sudo:session): session closed for user root
Mar 2 13:29:24.128297 sshd[1750]: Connection closed by 10.0.0.1 port 49418
Mar 2 13:29:24.129857 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Mar 2 13:29:24.186429 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:49418.service: Deactivated successfully.
Mar 2 13:29:24.194817 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:29:24.204292 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:29:24.222691 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:49432.service - OpenSSH per-connection server daemon (10.0.0.1:49432).
Mar 2 13:29:24.248260 systemd-logind[1541]: Removed session 7.
Mar 2 13:29:24.554917 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 49432 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:24.563813 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:24.634804 systemd-logind[1541]: New session 8 of user core.
Mar 2 13:29:24.683470 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:29:24.786026 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 2 13:29:24.791834 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:29:24.852694 sudo[1762]: pam_unix(sudo:session): session closed for user root
Mar 2 13:29:24.947682 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 2 13:29:24.951358 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:29:25.037762 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 13:29:25.534160 augenrules[1784]: No rules
Mar 2 13:29:25.529194 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:29:25.532111 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 13:29:25.548171 sudo[1761]: pam_unix(sudo:session): session closed for user root
Mar 2 13:29:25.573402 sshd[1760]: Connection closed by 10.0.0.1 port 49432
Mar 2 13:29:25.577409 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Mar 2 13:29:25.673538 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:49432.service: Deactivated successfully.
Mar 2 13:29:25.678153 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:29:25.692005 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:29:25.729823 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:49436.service - OpenSSH per-connection server daemon (10.0.0.1:49436).
Mar 2 13:29:25.754283 systemd-logind[1541]: Removed session 8.
Mar 2 13:29:25.995277 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 49436 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:29:26.000203 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:29:26.056703 systemd-logind[1541]: New session 9 of user core.
Mar 2 13:29:26.085306 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:29:26.157722 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:29:26.158312 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:29:31.508044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:29:31.682682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:29:32.064196 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:29:32.136234 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:29:34.087439 dockerd[1821]: time="2026-03-02T13:29:34.084822560Z" level=info msg="Starting up"
Mar 2 13:29:34.087439 dockerd[1821]: time="2026-03-02T13:29:34.086331108Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 2 13:29:34.313757 dockerd[1821]: time="2026-03-02T13:29:34.312066752Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 2 13:29:34.548538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:29:34.574805 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:29:34.792128 dockerd[1821]: time="2026-03-02T13:29:34.791921018Z" level=info msg="Loading containers: start."
Mar 2 13:29:34.851495 kernel: Initializing XFRM netlink socket
Mar 2 13:29:34.931889 kubelet[1849]: E0302 13:29:34.931273 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:29:34.943874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:29:34.944132 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:29:34.944893 systemd[1]: kubelet.service: Consumed 1.266s CPU time, 110.5M memory peak.
Mar 2 13:29:42.004179 systemd-networkd[1454]: docker0: Link UP
Mar 2 13:29:42.057281 dockerd[1821]: time="2026-03-02T13:29:42.056747928Z" level=info msg="Loading containers: done."
Mar 2 13:29:42.486828 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2756920600-merged.mount: Deactivated successfully.
Mar 2 13:29:42.542929 dockerd[1821]: time="2026-03-02T13:29:42.539288923Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:29:42.581540 dockerd[1821]: time="2026-03-02T13:29:42.544461039Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 2 13:29:42.613241 dockerd[1821]: time="2026-03-02T13:29:42.610752674Z" level=info msg="Initializing buildkit"
Mar 2 13:29:43.485006 dockerd[1821]: time="2026-03-02T13:29:43.482371770Z" level=info msg="Completed buildkit initialization"
Mar 2 13:29:43.517207 dockerd[1821]: time="2026-03-02T13:29:43.511510732Z" level=info msg="Daemon has completed initialization"
Mar 2 13:29:43.515270 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:29:43.520901 dockerd[1821]: time="2026-03-02T13:29:43.518661922Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:29:45.119407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:29:45.186240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:29:50.078048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:29:50.175116 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:29:54.415038 kubelet[2056]: E0302 13:29:54.410915 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:29:54.440193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:29:54.440461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:29:54.456269 systemd[1]: kubelet.service: Consumed 2.669s CPU time, 110.8M memory peak.
Mar 2 13:29:57.430363 containerd[1564]: time="2026-03-02T13:29:57.429148430Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 2 13:30:01.730408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084261107.mount: Deactivated successfully.
Mar 2 13:30:04.590937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:30:04.609461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:09.682062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:09.795125 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:11.638353 kubelet[2101]: E0302 13:30:11.635674 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:11.797360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:11.811407 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:12.133768 systemd[1]: kubelet.service: Consumed 1.986s CPU time, 110.6M memory peak.
Mar 2 13:30:21.925416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 13:30:21.956124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:30.131531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:30.385294 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:32.755921 kubelet[2151]: E0302 13:30:32.748469 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:32.805098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:32.805532 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:32.813124 systemd[1]: kubelet.service: Consumed 2.524s CPU time, 109.7M memory peak.
Mar 2 13:30:38.126933 containerd[1564]: time="2026-03-02T13:30:38.115990867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:38.151352 containerd[1564]: time="2026-03-02T13:30:38.144843224Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 2 13:30:38.196721 containerd[1564]: time="2026-03-02T13:30:38.193203204Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:38.269508 containerd[1564]: time="2026-03-02T13:30:38.269434269Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 40.839895931s"
Mar 2 13:30:38.273766 containerd[1564]: time="2026-03-02T13:30:38.271742657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:38.280314 containerd[1564]: time="2026-03-02T13:30:38.280253528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 2 13:30:38.299840 containerd[1564]: time="2026-03-02T13:30:38.295006503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 2 13:30:42.848325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 2 13:30:42.906499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:48.201234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:48.402406 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:50.265527 kubelet[2173]: E0302 13:30:50.263361 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:50.294232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:50.294537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:50.311192 systemd[1]: kubelet.service: Consumed 1.848s CPU time, 110.2M memory peak.
Mar 2 13:30:57.137906 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3440691935 wd_nsec: 3440691676
Mar 2 13:31:00.501214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 2 13:31:00.569270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:06.503724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:07.799866 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:31:12.619967 kubelet[2190]: E0302 13:31:12.615424 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:31:12.781448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:31:12.786415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:31:12.811963 systemd[1]: kubelet.service: Consumed 3.999s CPU time, 110.4M memory peak.
Mar 2 13:31:15.778907 containerd[1564]: time="2026-03-02T13:31:15.775846747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:15.808741 containerd[1564]: time="2026-03-02T13:31:15.808666640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 2 13:31:15.820838 containerd[1564]: time="2026-03-02T13:31:15.820741564Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:15.919324 containerd[1564]: time="2026-03-02T13:31:15.901242989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:15.947765 containerd[1564]: time="2026-03-02T13:31:15.947324206Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 37.647983773s"
Mar 2 13:31:15.953475 containerd[1564]: time="2026-03-02T13:31:15.948012244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 2 13:31:16.127745 containerd[1564]: time="2026-03-02T13:31:16.124456036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 2 13:31:22.964206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 2 13:31:23.061547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:25.919690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:25.977863 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:31:28.508169 kubelet[2210]: E0302 13:31:28.503459 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:31:28.548104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:31:28.548464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:31:28.599714 systemd[1]: kubelet.service: Consumed 1.639s CPU time, 110.5M memory peak.
Mar 2 13:31:36.485520 containerd[1564]: time="2026-03-02T13:31:36.483373405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:36.501773 containerd[1564]: time="2026-03-02T13:31:36.497790477Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 2 13:31:36.510946 containerd[1564]: time="2026-03-02T13:31:36.507469668Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:36.547094 containerd[1564]: time="2026-03-02T13:31:36.545093876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:36.707406 containerd[1564]: time="2026-03-02T13:31:36.664043747Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 20.539086174s"
Mar 2 13:31:36.768500 containerd[1564]: time="2026-03-02T13:31:36.719307722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 2 13:31:36.804313 containerd[1564]: time="2026-03-02T13:31:36.801896607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 2 13:31:38.593891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 2 13:31:38.610261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:43.145943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:43.301451 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:31:43.866407 kubelet[2232]: E0302 13:31:43.865828 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:31:43.878540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:31:43.879029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:31:43.882895 systemd[1]: kubelet.service: Consumed 1.301s CPU time, 108.8M memory peak.
Mar 2 13:31:52.788190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326080417.mount: Deactivated successfully.
Mar 2 13:31:54.154336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 2 13:31:54.270034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:58.848482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:58.977491 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:32:03.210150 kubelet[2252]: E0302 13:32:03.207250 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:32:03.266078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:32:03.266748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:32:03.270871 systemd[1]: kubelet.service: Consumed 5.230s CPU time, 110.7M memory peak.
Mar 2 13:32:10.223233 containerd[1564]: time="2026-03-02T13:32:10.217356629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:10.351764 containerd[1564]: time="2026-03-02T13:32:10.231158512Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 2 13:32:10.351764 containerd[1564]: time="2026-03-02T13:32:10.341661100Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:10.432192 containerd[1564]: time="2026-03-02T13:32:10.423389017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:10.449246 containerd[1564]: time="2026-03-02T13:32:10.435173617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 33.633154733s"
Mar 2 13:32:10.449246 containerd[1564]: time="2026-03-02T13:32:10.445030872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 2 13:32:10.603721 containerd[1564]: time="2026-03-02T13:32:10.551546775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 2 13:32:13.431009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 2 13:32:13.501921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:32:15.105032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088746870.mount: Deactivated successfully.
Mar 2 13:32:17.297079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:32:17.450320 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:32:19.204139 kubelet[2280]: E0302 13:32:19.203257 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:32:19.255453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:32:19.264464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:32:19.313830 systemd[1]: kubelet.service: Consumed 1.604s CPU time, 110.2M memory peak.
Mar 2 13:32:29.526321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 2 13:32:30.316728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:32:33.475288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:32:33.533075 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:32:35.294942 kubelet[2337]: E0302 13:32:35.289341 2337 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:32:35.317041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:32:35.317503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:32:35.329747 systemd[1]: kubelet.service: Consumed 1.591s CPU time, 110.7M memory peak.
Mar 2 13:32:38.530897 update_engine[1545]: I20260302 13:32:38.527321 1545 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 2 13:32:38.530897 update_engine[1545]: I20260302 13:32:38.527891 1545 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 2 13:32:38.530897 update_engine[1545]: I20260302 13:32:38.528952 1545 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 2 13:32:38.532471 update_engine[1545]: I20260302 13:32:38.532413 1545 omaha_request_params.cc:62] Current group set to stable
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533250 1545 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533277 1545 update_attempter.cc:643] Scheduling an action processor start.
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533303 1545 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533341 1545 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533514 1545 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 2 13:32:38.533759 update_engine[1545]: I20260302 13:32:38.533531 1545 omaha_request_action.cc:272] Request:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.533759 update_engine[1545]:
Mar 2 13:32:38.534765 update_engine[1545]: I20260302 13:32:38.534214 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:32:38.542516 update_engine[1545]: I20260302 13:32:38.542470 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:32:38.545121 update_engine[1545]: I20260302 13:32:38.545083 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:32:38.567197 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 2 13:32:38.571896 update_engine[1545]: E20260302 13:32:38.571161 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:32:38.571896 update_engine[1545]: I20260302 13:32:38.571369 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 2 13:32:43.821729 containerd[1564]: time="2026-03-02T13:32:43.820066051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:43.834210 containerd[1564]: time="2026-03-02T13:32:43.830148498Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 2 13:32:43.837326 containerd[1564]: time="2026-03-02T13:32:43.837075842Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:43.848121 containerd[1564]: time="2026-03-02T13:32:43.847144271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:32:43.851028 containerd[1564]: time="2026-03-02T13:32:43.850531051Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 33.295757227s"
Mar 2 13:32:43.852955 containerd[1564]: time="2026-03-02T13:32:43.851215468Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 2 13:32:43.865761 containerd[1564]: time="2026-03-02T13:32:43.865284873Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 2 13:32:45.356856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 2 13:32:45.392724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:32:46.521444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164521240.mount: Deactivated successfully.
Mar 2 13:32:47.095769 containerd[1564]: time="2026-03-02T13:32:47.085400726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:32:47.149870 containerd[1564]: time="2026-03-02T13:32:47.122321001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 2 13:32:47.156888 containerd[1564]: time="2026-03-02T13:32:47.154295067Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:32:47.222905 containerd[1564]: time="2026-03-02T13:32:47.222441513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:32:47.226138 containerd[1564]: time="2026-03-02T13:32:47.225870079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.360527886s"
Mar 2 13:32:47.226138 containerd[1564]: time="2026-03-02T13:32:47.226123192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 2 13:32:47.241154 containerd[1564]: time="2026-03-02T13:32:47.240839863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 2 13:32:47.778129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:32:47.904430 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:32:48.528820 update_engine[1545]: I20260302 13:32:48.523484 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:32:48.528820 update_engine[1545]: I20260302 13:32:48.524464 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:32:48.542403 update_engine[1545]: I20260302 13:32:48.540836 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:32:48.559513 update_engine[1545]: E20260302 13:32:48.556336 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:32:48.559513 update_engine[1545]: I20260302 13:32:48.556900 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 2 13:32:49.120525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27998167.mount: Deactivated successfully.
Mar 2 13:32:49.283740 kubelet[2359]: E0302 13:32:49.283378 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:32:49.295411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:32:49.295880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:32:49.296521 systemd[1]: kubelet.service: Consumed 1.247s CPU time, 110.1M memory peak.
Mar 2 13:32:58.532306 update_engine[1545]: I20260302 13:32:58.530166 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:32:58.532306 update_engine[1545]: I20260302 13:32:58.531178 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:32:58.555135 update_engine[1545]: I20260302 13:32:58.553087 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:32:58.708002 update_engine[1545]: E20260302 13:32:58.702211 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:32:59.227850 update_engine[1545]: I20260302 13:32:58.811153 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 2 13:33:01.020487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 2 13:33:01.057041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:33:04.658474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:04.793521 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:33:06.164362 kubelet[2429]: E0302 13:33:06.162114 2429 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:33:06.202948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:33:06.203306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:33:06.214027 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 108.6M memory peak.
Mar 2 13:33:09.620719 update_engine[1545]: I20260302 13:33:09.535010 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:33:09.620719 update_engine[1545]: I20260302 13:33:09.541236 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:33:09.697499 update_engine[1545]: I20260302 13:33:09.655347 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:33:09.697499 update_engine[1545]: E20260302 13:33:09.689252 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:33:09.697499 update_engine[1545]: I20260302 13:33:09.696998 1545 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 2 13:33:09.699268 update_engine[1545]: I20260302 13:33:09.697527 1545 omaha_request_action.cc:617] Omaha request response:
Mar 2 13:33:09.700035 update_engine[1545]: E20260302 13:33:09.699460 1545 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.700409 1545 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.700520 1545 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.700762 1545 update_attempter.cc:306] Processing Done.
Mar 2 13:33:09.702156 update_engine[1545]: E20260302 13:33:09.700944 1545 update_attempter.cc:619] Update failed.
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701112 1545 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701126 1545 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701136 1545 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701327 1545 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701463 1545 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701478 1545 omaha_request_action.cc:272] Request:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]:
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701489 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:33:09.702156 update_engine[1545]: I20260302 13:33:09.701529 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:33:09.725010 update_engine[1545]: I20260302 13:33:09.719733 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:33:09.725165 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 2 13:33:09.748881 update_engine[1545]: E20260302 13:33:09.747120 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747314 1545 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747339 1545 omaha_request_action.cc:617] Omaha request response:
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747357 1545 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747373 1545 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747440 1545 update_attempter.cc:306] Processing Done.
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747457 1545 update_attempter.cc:310] Error event sent.
Mar 2 13:33:09.748881 update_engine[1545]: I20260302 13:33:09.747997 1545 update_check_scheduler.cc:74] Next update check in 46m58s
Mar 2 13:33:09.760363 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 2 13:33:12.957829 containerd[1564]: time="2026-03-02T13:33:12.957301953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:33:13.018735 containerd[1564]: time="2026-03-02T13:33:12.962113818Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 2 13:33:13.018735 containerd[1564]: time="2026-03-02T13:33:12.981053746Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:33:13.143419 containerd[1564]: time="2026-03-02T13:33:13.125490786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:33:13.183236 containerd[1564]: time="2026-03-02T13:33:13.159503310Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 25.918525273s"
Mar 2 13:33:13.183236 containerd[1564]: time="2026-03-02T13:33:13.163272340Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 2 13:33:16.339785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Mar 2 13:33:16.358190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:33:17.659163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:17.742439 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:33:19.243810 kubelet[2483]: E0302 13:33:19.242456 2483 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:33:19.272782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:33:19.306035 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:33:19.450855 systemd[1]: kubelet.service: Consumed 1.027s CPU time, 109.1M memory peak.
Mar 2 13:33:29.357053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15.
Mar 2 13:33:29.381381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:33:30.159197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:30.213072 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:33:30.538808 kubelet[2500]: E0302 13:33:30.534998 2500 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:33:30.550080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:33:30.550443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:33:30.551225 systemd[1]: kubelet.service: Consumed 523ms CPU time, 110.5M memory peak.
Mar 2 13:33:32.161001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:32.161220 systemd[1]: kubelet.service: Consumed 523ms CPU time, 110.5M memory peak.
Mar 2 13:33:32.178818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:33:32.400090 systemd[1]: Reload requested from client PID 2518 ('systemctl') (unit session-9.scope)...
Mar 2 13:33:32.400650 systemd[1]: Reloading...
Mar 2 13:33:32.734305 zram_generator::config[2557]: No configuration found.
Mar 2 13:33:33.587142 systemd[1]: Reloading finished in 1180 ms.
Mar 2 13:33:33.786902 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 13:33:33.787118 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 13:33:33.788880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:33.788947 systemd[1]: kubelet.service: Consumed 276ms CPU time, 98.4M memory peak.
Mar 2 13:33:33.800036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:33:34.407047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:33:34.449964 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:33:34.840713 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:33:34.840713 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:33:34.840713 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:33:34.841514 kubelet[2609]: I0302 13:33:34.841143 2609 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:33:36.235753 kubelet[2609]: I0302 13:33:36.235079 2609 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 2 13:33:36.235753 kubelet[2609]: I0302 13:33:36.235179 2609 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:33:36.250320 kubelet[2609]: I0302 13:33:36.246801 2609 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:33:36.461735 kubelet[2609]: E0302 13:33:36.455872 2609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:33:36.473173 kubelet[2609]: I0302 13:33:36.471180 2609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:33:36.564894 kubelet[2609]: I0302 13:33:36.563214 2609 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 2 13:33:36.617786 kubelet[2609]: I0302 13:33:36.615066 2609 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 13:33:36.617786 kubelet[2609]: I0302 13:33:36.616407 2609 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:33:36.622114 kubelet[2609]: I0302 13:33:36.619216 2609 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:33:36.622114 kubelet[2609]: I0302 13:33:36.620162 2609 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:33:36.622114 kubelet[2609]: I0302 13:33:36.620178 2609 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 13:33:36.622114 kubelet[2609]: I0302 13:33:36.620367 2609 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:33:36.654097 kubelet[2609]: I0302 13:33:36.651120 2609 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 13:33:36.656071 kubelet[2609]: I0302 13:33:36.656044 2609 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:33:36.656295 kubelet[2609]: I0302 13:33:36.656201 2609 kubelet.go:386] "Adding apiserver pod source"
Mar 2 13:33:36.658755 kubelet[2609]: E0302 13:33:36.658270 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:33:36.661045 kubelet[2609]: I0302 13:33:36.660927 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:33:36.663247 kubelet[2609]: E0302 13:33:36.663204 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:33:36.673154 kubelet[2609]: I0302 13:33:36.672888 2609 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 2 13:33:36.675328 kubelet[2609]: I0302 13:33:36.673915 2609 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:33:36.682737 kubelet[2609]: W0302 13:33:36.677856 2609 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:33:36.700832 kubelet[2609]: I0302 13:33:36.700286 2609 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 13:33:36.705400 kubelet[2609]: I0302 13:33:36.702751 2609 server.go:1289] "Started kubelet"
Mar 2 13:33:36.705400 kubelet[2609]: I0302 13:33:36.704295 2609 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:33:36.705993 kubelet[2609]: I0302 13:33:36.705946 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:33:36.716707 kubelet[2609]: I0302 13:33:36.713269 2609 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:33:36.716707 kubelet[2609]: I0302 13:33:36.714395 2609 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:33:36.731394 kubelet[2609]: E0302 13:33:36.720721 2609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:33:36.731394 kubelet[2609]: E0302 13:33:36.731359 2609 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:33:36.745718 kubelet[2609]: I0302 13:33:36.743426 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:33:36.745718 kubelet[2609]: I0302 13:33:36.744026 2609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:33:36.748416 kubelet[2609]: I0302 13:33:36.748391 2609 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 13:33:36.757622 kubelet[2609]: E0302 13:33:36.749627 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:36.757768 kubelet[2609]: E0302 13:33:36.752261 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:33:36.757844 kubelet[2609]: I0302 13:33:36.756245 2609 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 13:33:36.759704 kubelet[2609]: E0302 13:33:36.756321 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms"
Mar 2 13:33:36.759885 kubelet[2609]: I0302 13:33:36.757822 2609 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 13:33:36.760157 kubelet[2609]: I0302 13:33:36.760137 2609 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:33:36.760323 kubelet[2609]: I0302 13:33:36.760302 2609 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:33:36.765003 kubelet[2609]: I0302 13:33:36.764981 2609 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:33:36.859789 kubelet[2609]: E0302 13:33:36.858336 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:36.873223 kubelet[2609]: I0302 13:33:36.873194 2609 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:33:36.873378 kubelet[2609]: I0302 13:33:36.873365 2609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:33:36.873538 kubelet[2609]: I0302 13:33:36.873519 2609 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:33:36.960053 kubelet[2609]: E0302 13:33:36.959669 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:36.961880 kubelet[2609]: E0302 13:33:36.961844 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms"
Mar 2 13:33:36.993015 kubelet[2609]: I0302 13:33:36.981404 2609 policy_none.go:49] "None policy: Start"
Mar 2 13:33:36.993015 kubelet[2609]: I0302 13:33:36.983354 2609 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 13:33:37.001654 kubelet[2609]: I0302 13:33:36.998162 2609 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 13:33:37.060986 kubelet[2609]: E0302 13:33:37.060951 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:37.064863 kubelet[2609]: I0302 13:33:37.062726 2609 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:33:37.080079 kubelet[2609]: I0302 13:33:37.077737 2609 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:33:37.080079 kubelet[2609]: I0302 13:33:37.077763 2609 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 13:33:37.080079 kubelet[2609]: I0302 13:33:37.077916 2609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:33:37.080079 kubelet[2609]: I0302 13:33:37.077931 2609 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 13:33:37.080079 kubelet[2609]: E0302 13:33:37.077994 2609 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:33:37.091289 kubelet[2609]: E0302 13:33:37.091243 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:33:37.164030 kubelet[2609]: E0302 13:33:37.163889 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:37.168043 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 13:33:37.181026 kubelet[2609]: E0302 13:33:37.180984 2609 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 2 13:33:37.254385 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 13:33:37.268788 kubelet[2609]: E0302 13:33:37.267882 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:33:37.292090 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 13:33:37.328761 kubelet[2609]: E0302 13:33:37.324706 2609 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:33:37.328761 kubelet[2609]: I0302 13:33:37.324982 2609 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:33:37.328761 kubelet[2609]: I0302 13:33:37.324996 2609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:33:37.328761 kubelet[2609]: I0302 13:33:37.326728 2609 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:33:37.342858 kubelet[2609]: E0302 13:33:37.339536 2609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:33:37.342858 kubelet[2609]: E0302 13:33:37.339715 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:33:37.387540 kubelet[2609]: E0302 13:33:37.387387 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms"
Mar 2 13:33:37.443687 kubelet[2609]: I0302 13:33:37.443359 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:33:37.444336 kubelet[2609]: E0302 13:33:37.444276 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost"
Mar 2 13:33:37.498177 kubelet[2609]: I0302 13:33:37.492090 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:33:37.498177 kubelet[2609]: I0302 13:33:37.492140 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:33:37.498177 kubelet[2609]: I0302 13:33:37.492172 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:33:37.498177 kubelet[2609]: I0302 13:33:37.492199 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:33:37.498177 kubelet[2609]: I0302 13:33:37.492219 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:33:37.504117 kubelet[2609]: I0302 13:33:37.492243 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:33:37.504117 kubelet[2609]: I0302 13:33:37.492265 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:33:37.504117 kubelet[2609]: I0302 13:33:37.503744 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:33:37.551828 systemd[1]: Created slice kubepods-burstable-pod6c3a8945630faa7859e98ee66162bf89.slice - libcontainer container kubepods-burstable-pod6c3a8945630faa7859e98ee66162bf89.slice.
Mar 2 13:33:37.594725 kubelet[2609]: E0302 13:33:37.594124 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:33:37.609100 kubelet[2609]: I0302 13:33:37.609055 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:33:37.627282 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 2 13:33:37.713444 kubelet[2609]: I0302 13:33:37.706186 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:33:37.715832 kubelet[2609]: E0302 13:33:37.715355 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:33:37.718131 kubelet[2609]: E0302 13:33:37.718092 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost"
Mar 2 13:33:37.732037 kubelet[2609]: E0302 13:33:37.720425 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:33:37.735737 containerd[1564]: time="2026-03-02T13:33:37.734945462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 2 13:33:37.816536 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 2 13:33:37.841840 kubelet[2609]: E0302 13:33:37.841140 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:33:37.843161 kubelet[2609]: E0302 13:33:37.843132 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:33:37.855763 containerd[1564]: time="2026-03-02T13:33:37.851358508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 2 13:33:37.934034 kubelet[2609]: E0302 13:33:37.927371 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:33:37.954270 containerd[1564]: time="2026-03-02T13:33:37.943020168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c3a8945630faa7859e98ee66162bf89,Namespace:kube-system,Attempt:0,}"
Mar 2 13:33:38.021192 kubelet[2609]: E0302 13:33:37.995121 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:33:38.021192 kubelet[2609]: E0302 13:33:38.019940 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:33:38.219311 kubelet[2609]: E0302 13:33:38.214037 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:33:38.219311 kubelet[2609]: E0302 13:33:38.215441 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="1.6s"
Mar 2 13:33:38.245362 kubelet[2609]: E0302 13:33:38.214457 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:33:38.344189 kubelet[2609]: I0302 13:33:38.338176 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:33:38.348869 kubelet[2609]: E0302 13:33:38.344431 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost"
Mar 2 13:33:38.726756 kubelet[2609]: E0302 13:33:38.660321 2609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:33:39.133830 kubelet[2609]: E0302 13:33:39.036816 2609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:33:44.019467 kubelet[2609]: E0302 13:33:44.018935 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:33:44.019467 kubelet[2609]: E0302 13:33:44.051866 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:33:44.019467 kubelet[2609]: E0302 13:33:44.029898 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:33:44.019467 kubelet[2609]: E0302 13:33:44.019557 2609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:33:44.161015 kubelet[2609]: E0302 13:33:44.059206 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="3.2s"
Mar 2 13:33:44.161015 kubelet[2609]: E0302 13:33:44.062312 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:33:44.161015 kubelet[2609]: I0302 13:33:44.144892 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:33:44.161015 kubelet[2609]: E0302 13:33:44.146513 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost"
Mar 2 13:33:45.155225 containerd[1564]: time="2026-03-02T13:33:45.151442189Z" level=info msg="connecting to shim 6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8" address="unix:///run/containerd/s/e4788b7f404ccb0b7394a176c45007151da80fe32850f4cecc88ffea6316164b" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:33:45.183211 containerd[1564]: time="2026-03-02T13:33:45.183158636Z" level=info msg="connecting to shim 84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da" address="unix:///run/containerd/s/af7d3b1aa0594599de36ac24ac1f16cf28c161c288013573aa8ffc7494323903" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:33:45.302203 containerd[1564]: time="2026-03-02T13:33:45.299830102Z" level=info msg="connecting to shim 17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c" address="unix:///run/containerd/s/74e4171a7807300e8253e619de4ed7aeff9335d3cca76396f253eb0166b3bee1" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:33:45.901883 kubelet[2609]: I0302 13:33:45.901404 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:33:45.906214 kubelet[2609]: E0302 13:33:45.902261 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost"
Mar 2 13:33:46.102175 systemd[1]: Started cri-containerd-6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8.scope - libcontainer container 6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8.
Mar 2 13:33:47.306431 kubelet[2609]: E0302 13:33:47.301173 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="6.4s"
Mar 2 13:33:47.349709 kubelet[2609]: E0302 13:33:47.349469 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:33:47.380934 systemd[1]: Started cri-containerd-17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c.scope - libcontainer container 17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c.
Mar 2 13:33:48.218291 kubelet[2609]: E0302 13:33:48.208834 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:33:48.218291 kubelet[2609]: E0302 13:33:48.212530 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:33:48.230144 kubelet[2609]: E0302 13:33:48.225046 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:33:49.240547 kubelet[2609]: E0302 13:33:49.135158 2609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:33:49.312093 kubelet[2609]: I0302 13:33:49.311028 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:33:49.337514 kubelet[2609]: E0302 13:33:49.316006 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" Mar 2 13:33:49.884477 systemd[1]: Started cri-containerd-84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da.scope - libcontainer container 84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da. Mar 2 13:33:50.281265 kubelet[2609]: E0302 13:33:50.279356 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:33:50.615775 containerd[1564]: time="2026-03-02T13:33:50.615363406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8\"" Mar 2 13:33:50.622737 kubelet[2609]: E0302 13:33:50.622109 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:50.775936 containerd[1564]: time="2026-03-02T13:33:50.773272580Z" level=info msg="CreateContainer within sandbox \"6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:33:50.789540 containerd[1564]: 
time="2026-03-02T13:33:50.787775353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c3a8945630faa7859e98ee66162bf89,Namespace:kube-system,Attempt:0,} returns sandbox id \"17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c\"" Mar 2 13:33:50.794991 kubelet[2609]: E0302 13:33:50.794914 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:50.826370 containerd[1564]: time="2026-03-02T13:33:50.826266961Z" level=info msg="CreateContainer within sandbox \"17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:33:51.001469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931709868.mount: Deactivated successfully. Mar 2 13:33:51.059189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458192806.mount: Deactivated successfully. 
Mar 2 13:33:51.094084 containerd[1564]: time="2026-03-02T13:33:51.092256057Z" level=info msg="Container 38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:33:51.116016 containerd[1564]: time="2026-03-02T13:33:51.115140698Z" level=info msg="Container aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:33:51.209239 containerd[1564]: time="2026-03-02T13:33:51.209170677Z" level=info msg="CreateContainer within sandbox \"17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e\"" Mar 2 13:33:51.223240 containerd[1564]: time="2026-03-02T13:33:51.219949201Z" level=info msg="StartContainer for \"38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e\"" Mar 2 13:33:51.239706 containerd[1564]: time="2026-03-02T13:33:51.235746475Z" level=info msg="connecting to shim 38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e" address="unix:///run/containerd/s/74e4171a7807300e8253e619de4ed7aeff9335d3cca76396f253eb0166b3bee1" protocol=ttrpc version=3 Mar 2 13:33:51.262133 containerd[1564]: time="2026-03-02T13:33:51.260156352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da\"" Mar 2 13:33:51.290008 kubelet[2609]: E0302 13:33:51.288525 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:51.381974 containerd[1564]: time="2026-03-02T13:33:51.381820728Z" level=info msg="CreateContainer within sandbox 
\"6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60\"" Mar 2 13:33:51.403170 containerd[1564]: time="2026-03-02T13:33:51.399013795Z" level=info msg="StartContainer for \"aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60\"" Mar 2 13:33:51.414730 containerd[1564]: time="2026-03-02T13:33:51.410991181Z" level=info msg="connecting to shim aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60" address="unix:///run/containerd/s/e4788b7f404ccb0b7394a176c45007151da80fe32850f4cecc88ffea6316164b" protocol=ttrpc version=3 Mar 2 13:33:51.477425 containerd[1564]: time="2026-03-02T13:33:51.466180957Z" level=info msg="CreateContainer within sandbox \"84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:33:51.579220 systemd[1]: Started cri-containerd-38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e.scope - libcontainer container 38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e. Mar 2 13:33:51.580145 containerd[1564]: time="2026-03-02T13:33:51.579265470Z" level=info msg="Container 2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:33:51.619463 systemd[1]: Started cri-containerd-aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60.scope - libcontainer container aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60. 
Mar 2 13:33:51.672790 containerd[1564]: time="2026-03-02T13:33:51.668969621Z" level=info msg="CreateContainer within sandbox \"84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1\"" Mar 2 13:33:51.762403 containerd[1564]: time="2026-03-02T13:33:51.673798033Z" level=info msg="StartContainer for \"2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1\"" Mar 2 13:33:51.762403 containerd[1564]: time="2026-03-02T13:33:51.755109351Z" level=info msg="connecting to shim 2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1" address="unix:///run/containerd/s/af7d3b1aa0594599de36ac24ac1f16cf28c161c288013573aa8ffc7494323903" protocol=ttrpc version=3 Mar 2 13:33:52.122967 systemd[1]: Started cri-containerd-2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1.scope - libcontainer container 2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1. 
Mar 2 13:33:52.521201 containerd[1564]: time="2026-03-02T13:33:52.515480091Z" level=info msg="StartContainer for \"38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e\" returns successfully" Mar 2 13:33:52.525210 containerd[1564]: time="2026-03-02T13:33:52.521840160Z" level=info msg="StartContainer for \"aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60\" returns successfully" Mar 2 13:33:52.621292 kubelet[2609]: E0302 13:33:52.621142 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:52.626718 kubelet[2609]: E0302 13:33:52.623538 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:52.640434 kubelet[2609]: E0302 13:33:52.636296 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:52.640434 kubelet[2609]: E0302 13:33:52.636458 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:52.810998 kubelet[2609]: E0302 13:33:52.810733 2609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:33:52.862430 containerd[1564]: time="2026-03-02T13:33:52.855257855Z" level=info msg="StartContainer for \"2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1\" returns successfully" Mar 2 13:33:53.821482 kubelet[2609]: E0302 13:33:53.805245 2609 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="7s" Mar 2 13:33:53.913326 kubelet[2609]: E0302 13:33:53.912035 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:53.922039 kubelet[2609]: E0302 13:33:53.920320 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:53.925140 kubelet[2609]: E0302 13:33:53.925112 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:53.926292 kubelet[2609]: E0302 13:33:53.926268 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:53.937217 kubelet[2609]: E0302 13:33:53.936858 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:53.937217 kubelet[2609]: E0302 13:33:53.937138 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:54.964807 kubelet[2609]: E0302 13:33:54.962031 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Mar 2 13:33:55.094530 kubelet[2609]: E0302 13:33:55.094485 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:55.103150 kubelet[2609]: E0302 13:33:55.101893 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:55.113883 kubelet[2609]: E0302 13:33:55.113490 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:55.113883 kubelet[2609]: E0302 13:33:55.113815 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:55.453382 kubelet[2609]: E0302 13:33:55.449364 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:55.453382 kubelet[2609]: E0302 13:33:55.453128 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:56.153166 kubelet[2609]: I0302 13:33:56.152172 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:33:56.434801 kubelet[2609]: E0302 13:33:56.411152 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:56.434801 kubelet[2609]: E0302 13:33:56.411872 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:56.434801 kubelet[2609]: E0302 
13:33:56.414885 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:33:56.434801 kubelet[2609]: E0302 13:33:56.415135 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:33:57.395297 kubelet[2609]: E0302 13:33:57.391855 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:02.583750 kubelet[2609]: E0302 13:34:02.583254 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:34:02.583750 kubelet[2609]: E0302 13:34:02.585827 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:34:05.527785 kubelet[2609]: E0302 13:34:05.524517 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:34:05.543909 kubelet[2609]: E0302 13:34:05.532047 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:34:06.332922 kubelet[2609]: E0302 13:34:06.319533 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 2 13:34:07.426030 kubelet[2609]: E0302 13:34:07.419305 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:09.539483 kubelet[2609]: E0302 13:34:09.518882 2609 
reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:34:09.539483 kubelet[2609]: E0302 13:34:09.526886 2609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:34:09.539483 kubelet[2609]: E0302 13:34:09.528885 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:34:09.878794 kubelet[2609]: E0302 13:34:09.856141 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:34:10.820021 kubelet[2609]: E0302 13:34:10.817502 2609 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 2 13:34:13.388539 kubelet[2609]: I0302 13:34:13.376140 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:34:17.453123 kubelet[2609]: E0302 13:34:17.450219 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:18.220430 kubelet[2609]: E0302 13:34:18.217095 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:34:19.682095 kubelet[2609]: E0302 13:34:19.678914 2609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:34:19.682095 kubelet[2609]: E0302 13:34:19.682256 2609 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:34:23.614736 kubelet[2609]: E0302 13:34:23.611729 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 2 13:34:27.861170 kubelet[2609]: E0302 13:34:27.856078 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Mar 2 13:34:27.919789 kubelet[2609]: E0302 13:34:27.915837 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:29.818962 kubelet[2609]: E0302 13:34:29.810325 2609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:34:30.730773 kubelet[2609]: I0302 13:34:30.726188 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:34:31.244735 kubelet[2609]: E0302 13:34:31.232218 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:34:31.246809 kubelet[2609]: E0302 13:34:31.246778 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:34:37.921663 kubelet[2609]: E0302 13:34:37.918084 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 
13:34:38.130843 kubelet[2609]: E0302 13:34:38.130084 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:34:38.446718 kubelet[2609]: E0302 13:34:38.446126 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:34:40.747172 kubelet[2609]: E0302 13:34:40.742007 2609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 2 13:34:41.487470 kubelet[2609]: E0302 13:34:41.486088 2609 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:34:44.869511 kubelet[2609]: E0302 13:34:44.866488 2609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 2 13:34:47.937396 kubelet[2609]: E0302 13:34:47.929966 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:48.002863 kubelet[2609]: I0302 13:34:47.999133 2609 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Mar 2 13:34:48.169999 kubelet[2609]: E0302 13:34:48.165019 2609 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899098a0293c3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,LastTimestamp:2026-03-02 13:33:36.700376039 +0000 UTC m=+2.134221581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:34:48.376518 kubelet[2609]: I0302 13:34:48.361543 2609 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:34:48.376518 kubelet[2609]: E0302 13:34:48.388787 2609 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:34:48.732082 kubelet[2609]: E0302 13:34:48.711087 2609 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899098a046c33c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.731337667 +0000 UTC m=+2.165183219,LastTimestamp:2026-03-02 13:33:36.731337667 +0000 UTC m=+2.165183219,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:34:49.562525 
kubelet[2609]: E0302 13:34:49.540270 2609 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899098a0c807a57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:33:36.866884183 +0000 UTC m=+2.300729755,LastTimestamp:2026-03-02 13:33:36.866884183 +0000 UTC m=+2.300729755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:34:50.415240 kubelet[2609]: E0302 13:34:50.413029 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:50.519157 kubelet[2609]: E0302 13:34:50.517955 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:50.624105 kubelet[2609]: E0302 13:34:50.620024 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:51.847429 kubelet[2609]: E0302 13:34:50.733482 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:51.847429 kubelet[2609]: E0302 13:34:51.018521 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:51.847429 kubelet[2609]: E0302 13:34:51.762029 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:51.876363 kubelet[2609]: E0302 13:34:51.876321 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Mar 2 13:34:52.007068 kubelet[2609]: E0302 13:34:52.004072 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:52.638342 kubelet[2609]: E0302 13:34:52.637229 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:52.762049 kubelet[2609]: E0302 13:34:52.762003 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:52.896461 kubelet[2609]: E0302 13:34:52.872537 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.000097 kubelet[2609]: E0302 13:34:53.000036 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.111997 kubelet[2609]: E0302 13:34:53.111938 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.220520 kubelet[2609]: E0302 13:34:53.219272 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.616278 kubelet[2609]: E0302 13:34:53.566333 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.738048 kubelet[2609]: E0302 13:34:53.738009 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.857277 kubelet[2609]: E0302 13:34:53.857194 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:53.981244 kubelet[2609]: E0302 13:34:53.957502 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.090146 kubelet[2609]: E0302 13:34:54.090103 2609 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.194332 kubelet[2609]: E0302 13:34:54.193960 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.296537 kubelet[2609]: E0302 13:34:54.296399 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.430292 kubelet[2609]: E0302 13:34:54.427341 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.587527 kubelet[2609]: E0302 13:34:54.572092 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.782945 kubelet[2609]: E0302 13:34:54.779415 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.888103 kubelet[2609]: E0302 13:34:54.887025 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:54.988108 kubelet[2609]: E0302 13:34:54.988041 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:55.089110 kubelet[2609]: E0302 13:34:55.088338 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:55.191411 kubelet[2609]: E0302 13:34:55.190979 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:55.517211 kubelet[2609]: E0302 13:34:55.462367 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:55.701438 kubelet[2609]: E0302 13:34:55.587374 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Mar 2 13:34:55.905165 kubelet[2609]: E0302 13:34:55.900137 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.002367 kubelet[2609]: E0302 13:34:56.002305 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.103061 kubelet[2609]: E0302 13:34:56.103006 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.212887 kubelet[2609]: E0302 13:34:56.212751 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.325126 kubelet[2609]: E0302 13:34:56.325084 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.482055 kubelet[2609]: E0302 13:34:56.451168 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:56.574416 kubelet[2609]: E0302 13:34:56.567446 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.055507 kubelet[2609]: E0302 13:34:56.678528 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.215757 kubelet[2609]: E0302 13:34:57.152207 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.322290 kubelet[2609]: E0302 13:34:57.315786 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.424223 kubelet[2609]: E0302 13:34:57.419542 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.526321 kubelet[2609]: E0302 13:34:57.526257 2609 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.628107 kubelet[2609]: E0302 13:34:57.627977 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.742084 kubelet[2609]: E0302 13:34:57.730244 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.836078 kubelet[2609]: E0302 13:34:57.831344 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.938968 kubelet[2609]: E0302 13:34:57.933361 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:57.954767 kubelet[2609]: E0302 13:34:57.949484 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:34:58.044275 kubelet[2609]: E0302 13:34:58.041191 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:58.372344 kubelet[2609]: E0302 13:34:58.203799 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:58.544231 kubelet[2609]: E0302 13:34:58.311456 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:58.544231 kubelet[2609]: E0302 13:34:58.500294 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:58.815253 kubelet[2609]: E0302 13:34:58.783397 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:58.894099 kubelet[2609]: E0302 13:34:58.884943 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 
13:34:58.999777 kubelet[2609]: E0302 13:34:58.990766 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:59.091544 kubelet[2609]: E0302 13:34:59.091402 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:59.632236 kubelet[2609]: E0302 13:34:59.392490 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:59.813040 kubelet[2609]: E0302 13:34:59.810460 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:34:59.928431 kubelet[2609]: E0302 13:34:59.925258 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.065459 kubelet[2609]: E0302 13:35:00.026303 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.189514 kubelet[2609]: E0302 13:35:00.171177 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.323480 kubelet[2609]: E0302 13:35:00.323434 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.431947 kubelet[2609]: E0302 13:35:00.428071 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.723156 kubelet[2609]: E0302 13:35:00.570150 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.837465 kubelet[2609]: E0302 13:35:00.744988 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:00.837465 kubelet[2609]: E0302 13:35:00.745264 2609 kubelet_node_status.go:548] "Error updating node 
status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:35:01.038832 kubelet[2609]: E0302 13:35:00.986072 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.087353 kubelet[2609]: E0302 13:35:01.087260 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.197031 kubelet[2609]: E0302 13:35:01.190187 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.350984 kubelet[2609]: E0302 13:35:01.342279 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.462144 kubelet[2609]: E0302 13:35:01.460937 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.593945 kubelet[2609]: E0302 13:35:01.584424 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.717177 kubelet[2609]: E0302 13:35:01.695229 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.842140 kubelet[2609]: E0302 13:35:01.841192 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:01.987216 kubelet[2609]: E0302 13:35:01.985354 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.105012 kubelet[2609]: E0302 13:35:02.104278 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.206133 kubelet[2609]: E0302 13:35:02.206093 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 
13:35:02.412952 kubelet[2609]: E0302 13:35:02.337143 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.456191 kubelet[2609]: E0302 13:35:02.451955 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.583371 kubelet[2609]: E0302 13:35:02.577850 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.679898 kubelet[2609]: E0302 13:35:02.679459 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.785996 kubelet[2609]: E0302 13:35:02.785469 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.892125 kubelet[2609]: E0302 13:35:02.891435 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:02.993078 kubelet[2609]: E0302 13:35:02.992911 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.157149 kubelet[2609]: E0302 13:35:03.156117 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.296251 kubelet[2609]: E0302 13:35:03.282189 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.559192 kubelet[2609]: E0302 13:35:03.445497 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.586203 kubelet[2609]: E0302 13:35:03.583174 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.690834 kubelet[2609]: E0302 13:35:03.687075 2609 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.789136 kubelet[2609]: E0302 13:35:03.789080 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:03.891348 kubelet[2609]: E0302 13:35:03.891287 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.007238 kubelet[2609]: E0302 13:35:04.004488 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.145137 kubelet[2609]: E0302 13:35:04.131039 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.234008 kubelet[2609]: E0302 13:35:04.233025 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.340015 kubelet[2609]: E0302 13:35:04.335201 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.447476 kubelet[2609]: E0302 13:35:04.444021 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.551321 kubelet[2609]: E0302 13:35:04.551271 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.654453 kubelet[2609]: E0302 13:35:04.654036 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.758353 kubelet[2609]: E0302 13:35:04.758021 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.865239 kubelet[2609]: E0302 13:35:04.865194 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:04.968997 kubelet[2609]: E0302 
13:35:04.968372 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.154122 kubelet[2609]: E0302 13:35:05.107500 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.257450 kubelet[2609]: E0302 13:35:05.257372 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.297490 kubelet[2609]: E0302 13:35:05.291241 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:35:05.297490 kubelet[2609]: E0302 13:35:05.291436 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:05.374323 kubelet[2609]: E0302 13:35:05.374274 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.522490 kubelet[2609]: E0302 13:35:05.517936 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.621447 kubelet[2609]: E0302 13:35:05.619301 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.738366 kubelet[2609]: E0302 13:35:05.737308 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.895021 kubelet[2609]: E0302 13:35:05.878503 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:05.987376 kubelet[2609]: E0302 13:35:05.987292 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.097977 kubelet[2609]: E0302 
13:35:06.097912 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.199998 kubelet[2609]: E0302 13:35:06.199375 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.301256 kubelet[2609]: E0302 13:35:06.301205 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.405329 kubelet[2609]: E0302 13:35:06.405242 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.511849 kubelet[2609]: E0302 13:35:06.510318 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.617867 kubelet[2609]: E0302 13:35:06.616492 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.742422 kubelet[2609]: E0302 13:35:06.742379 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.850147 kubelet[2609]: E0302 13:35:06.849361 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:06.957537 kubelet[2609]: E0302 13:35:06.952232 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.060542 kubelet[2609]: E0302 13:35:07.058329 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.169174 kubelet[2609]: E0302 13:35:07.166120 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.277388 kubelet[2609]: E0302 13:35:07.274404 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Mar 2 13:35:07.382191 kubelet[2609]: E0302 13:35:07.382121 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.504945 kubelet[2609]: E0302 13:35:07.504314 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.614932 kubelet[2609]: E0302 13:35:07.614506 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.723182 kubelet[2609]: E0302 13:35:07.719352 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.828073 kubelet[2609]: E0302 13:35:07.827933 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.968114 kubelet[2609]: E0302 13:35:07.968066 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:07.968543 kubelet[2609]: E0302 13:35:07.968435 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:35:08.078019 kubelet[2609]: E0302 13:35:08.070205 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.179390 kubelet[2609]: E0302 13:35:08.179189 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.285856 kubelet[2609]: E0302 13:35:08.283862 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.387966 kubelet[2609]: E0302 13:35:08.387458 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.489062 kubelet[2609]: E0302 
13:35:08.488221 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.592899 kubelet[2609]: E0302 13:35:08.591857 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.696016 kubelet[2609]: E0302 13:35:08.695949 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.807537 kubelet[2609]: E0302 13:35:08.807003 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:08.919200 kubelet[2609]: E0302 13:35:08.914430 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.031943 kubelet[2609]: E0302 13:35:09.029062 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.132080 kubelet[2609]: E0302 13:35:09.132017 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.232866 kubelet[2609]: E0302 13:35:09.232719 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.344864 kubelet[2609]: E0302 13:35:09.337440 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.445886 kubelet[2609]: E0302 13:35:09.445031 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.546863 kubelet[2609]: E0302 13:35:09.546122 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.651306 kubelet[2609]: E0302 13:35:09.651258 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Mar 2 13:35:09.773305 kubelet[2609]: E0302 13:35:09.770394 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.878348 kubelet[2609]: E0302 13:35:09.877116 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:09.981548 kubelet[2609]: E0302 13:35:09.980071 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.082314 kubelet[2609]: E0302 13:35:10.082267 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.183069 kubelet[2609]: E0302 13:35:10.183008 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.286882 kubelet[2609]: E0302 13:35:10.286396 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.403236 kubelet[2609]: E0302 13:35:10.391735 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.505998 kubelet[2609]: E0302 13:35:10.505859 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.615342 kubelet[2609]: E0302 13:35:10.615287 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.732067 kubelet[2609]: E0302 13:35:10.728380 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.833401 kubelet[2609]: E0302 13:35:10.833206 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:10.942896 kubelet[2609]: E0302 13:35:10.941251 2609 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.050165 kubelet[2609]: E0302 13:35:11.041852 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.144261 kubelet[2609]: E0302 13:35:11.144111 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.256126 kubelet[2609]: E0302 13:35:11.252505 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.318491 kubelet[2609]: E0302 13:35:11.316307 2609 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:35:11.375276 kubelet[2609]: E0302 13:35:11.373995 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.480226 kubelet[2609]: E0302 13:35:11.480172 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.582230 kubelet[2609]: E0302 13:35:11.582173 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.704480 kubelet[2609]: E0302 13:35:11.703175 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.811167 kubelet[2609]: E0302 13:35:11.811100 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:11.925739 kubelet[2609]: E0302 13:35:11.914968 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.030169 kubelet[2609]: E0302 13:35:12.020243 2609 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Mar 2 13:35:12.121080 kubelet[2609]: E0302 13:35:12.121035 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.223750 kubelet[2609]: E0302 13:35:12.223301 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.326181 kubelet[2609]: E0302 13:35:12.324215 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.444242 kubelet[2609]: E0302 13:35:12.444197 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.555095 kubelet[2609]: E0302 13:35:12.554980 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.656158 kubelet[2609]: E0302 13:35:12.656110 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.771916 kubelet[2609]: E0302 13:35:12.767339 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.869409 kubelet[2609]: E0302 13:35:12.869357 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:12.979955 kubelet[2609]: E0302 13:35:12.979879 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.083141 kubelet[2609]: E0302 13:35:13.083093 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.203066 kubelet[2609]: E0302 13:35:13.197069 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.304514 kubelet[2609]: E0302 13:35:13.297721 2609 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.400155 kubelet[2609]: E0302 13:35:13.398879 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.518351 kubelet[2609]: E0302 13:35:13.505983 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.613465 kubelet[2609]: E0302 13:35:13.613269 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.717030 kubelet[2609]: E0302 13:35:13.714107 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.818144 kubelet[2609]: E0302 13:35:13.816181 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:13.929368 kubelet[2609]: E0302 13:35:13.918090 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.025525 kubelet[2609]: E0302 13:35:14.021131 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.109380 systemd[1]: Reload requested from client PID 2914 ('systemctl') (unit session-9.scope)... Mar 2 13:35:14.110233 systemd[1]: Reloading... 
Mar 2 13:35:14.137260 kubelet[2609]: E0302 13:35:14.137204 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.149073 kubelet[2609]: E0302 13:35:14.146317 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:35:14.149073 kubelet[2609]: E0302 13:35:14.146507 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:14.238143 kubelet[2609]: E0302 13:35:14.237351 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.340907 kubelet[2609]: E0302 13:35:14.339870 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.444153 kubelet[2609]: E0302 13:35:14.442521 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.560078 kubelet[2609]: E0302 13:35:14.550338 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.652511 kubelet[2609]: E0302 13:35:14.652469 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.778068 kubelet[2609]: E0302 13:35:14.772404 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.876210 kubelet[2609]: E0302 13:35:14.872992 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:14.974267 kubelet[2609]: E0302 13:35:14.974219 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Mar 2 13:35:15.016199 kubelet[2609]: E0302 13:35:15.006444 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:35:15.016199 kubelet[2609]: E0302 13:35:15.007057 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:15.065040 zram_generator::config[2957]: No configuration found. Mar 2 13:35:15.078934 kubelet[2609]: E0302 13:35:15.075374 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.191251 kubelet[2609]: E0302 13:35:15.184202 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.298090 kubelet[2609]: E0302 13:35:15.296767 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.412931 kubelet[2609]: E0302 13:35:15.412373 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.523927 kubelet[2609]: E0302 13:35:15.516977 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.623248 kubelet[2609]: E0302 13:35:15.621252 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.728230 kubelet[2609]: E0302 13:35:15.728079 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.843950 kubelet[2609]: E0302 13:35:15.843415 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:15.949892 kubelet[2609]: E0302 13:35:15.949736 2609 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:16.115353 kubelet[2609]: E0302 13:35:16.094374 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:16.149370 kubelet[2609]: E0302 13:35:16.134449 2609 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:35:16.149370 kubelet[2609]: E0302 13:35:16.134968 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:16.201007 kubelet[2609]: E0302 13:35:16.195493 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:16.340122 kubelet[2609]: E0302 13:35:16.334525 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:16.626542 kubelet[2609]: E0302 13:35:16.500350 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:16.894228 kubelet[2609]: E0302 13:35:16.843337 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.011460 kubelet[2609]: E0302 13:35:17.010923 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.334119 kubelet[2609]: E0302 13:35:17.323210 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.453784 kubelet[2609]: E0302 13:35:17.452471 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.689514 kubelet[2609]: E0302 13:35:17.635487 2609 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.961884 kubelet[2609]: E0302 13:35:17.951766 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:17.982256 kubelet[2609]: E0302 13:35:17.979423 2609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:35:18.204412 kubelet[2609]: E0302 13:35:18.202220 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:18.346130 kubelet[2609]: E0302 13:35:18.345780 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:18.461241 kubelet[2609]: E0302 13:35:18.452266 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:18.564247 kubelet[2609]: E0302 13:35:18.564187 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:18.672476 kubelet[2609]: E0302 13:35:18.671956 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:18.842331 kubelet[2609]: E0302 13:35:18.826084 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:19.019390 kubelet[2609]: E0302 13:35:18.972430 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:19.811489 kubelet[2609]: E0302 13:35:19.245042 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:19.828304 kubelet[2609]: E0302 13:35:19.820200 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" 
Mar 2 13:35:19.938236 kubelet[2609]: E0302 13:35:19.934279 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:19.991281 systemd[1]: Reloading finished in 5879 ms. Mar 2 13:35:20.038245 kubelet[2609]: E0302 13:35:20.036308 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:20.142271 kubelet[2609]: E0302 13:35:20.139905 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:20.595017 kubelet[2609]: E0302 13:35:20.494762 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:20.586061 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:35:20.619456 kubelet[2609]: E0302 13:35:20.617708 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:20.824741 kubelet[2609]: E0302 13:35:20.822704 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:21.049191 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:35:21.049704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:35:21.049778 systemd[1]: kubelet.service: Consumed 12.948s CPU time, 137.5M memory peak. Mar 2 13:35:21.108448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:35:22.782289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:35:22.894712 (kubelet)[3001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:35:23.352467 kubelet[3001]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:35:23.352467 kubelet[3001]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:35:23.352467 kubelet[3001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:35:23.352467 kubelet[3001]: I0302 13:35:23.350438 3001 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:35:23.476963 kubelet[3001]: I0302 13:35:23.476910 3001 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:35:23.477189 kubelet[3001]: I0302 13:35:23.477168 3001 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:35:23.477773 kubelet[3001]: I0302 13:35:23.477748 3001 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:35:23.493438 kubelet[3001]: I0302 13:35:23.493397 3001 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:35:23.555114 kubelet[3001]: I0302 13:35:23.553679 3001 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:35:23.709215 kubelet[3001]: I0302 13:35:23.694159 3001 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 13:35:23.791732 kubelet[3001]: I0302 13:35:23.789784 3001 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 13:35:23.791732 kubelet[3001]: I0302 13:35:23.790462 3001 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:35:23.791732 kubelet[3001]: I0302 13:35:23.790508 3001 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:35:23.791732 kubelet[3001]: I0302 13:35:23.791004 3001 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:35:23.796286 
kubelet[3001]: I0302 13:35:23.791019 3001 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:35:23.796286 kubelet[3001]: I0302 13:35:23.791083 3001 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:35:23.796286 kubelet[3001]: I0302 13:35:23.791319 3001 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:35:23.796286 kubelet[3001]: I0302 13:35:23.791338 3001 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:35:23.796286 kubelet[3001]: I0302 13:35:23.791371 3001 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:35:23.796286 kubelet[3001]: I0302 13:35:23.791391 3001 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:35:23.801061 kubelet[3001]: I0302 13:35:23.799064 3001 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 13:35:23.807941 kubelet[3001]: I0302 13:35:23.805527 3001 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:35:23.831374 kubelet[3001]: I0302 13:35:23.831342 3001 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:35:23.831808 kubelet[3001]: I0302 13:35:23.831785 3001 server.go:1289] "Started kubelet" Mar 2 13:35:23.884377 kubelet[3001]: I0302 13:35:23.838703 3001 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:35:23.890627 kubelet[3001]: I0302 13:35:23.838767 3001 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:35:23.890627 kubelet[3001]: I0302 13:35:23.887007 3001 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:35:23.890627 kubelet[3001]: I0302 13:35:23.848804 3001 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:35:23.890627 kubelet[3001]: I0302 13:35:23.848706 3001 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:35:23.907515 kubelet[3001]: I0302 13:35:23.903397 3001 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:35:23.935002 kubelet[3001]: I0302 13:35:23.934349 3001 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:35:23.951343 kubelet[3001]: E0302 13:35:23.949450 3001 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:23.952710 kubelet[3001]: I0302 13:35:23.952295 3001 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:35:23.952710 kubelet[3001]: I0302 13:35:23.952538 3001 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:35:23.962345 kubelet[3001]: I0302 13:35:23.953819 3001 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:35:23.962345 kubelet[3001]: I0302 13:35:23.959950 3001 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:35:24.061988 kubelet[3001]: E0302 13:35:24.056775 3001 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:35:24.118076 kubelet[3001]: E0302 13:35:24.117989 3001 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:35:24.153302 kubelet[3001]: I0302 13:35:24.148309 3001 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:35:24.534141 kubelet[3001]: I0302 13:35:24.534047 3001 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 2 13:35:24.601727 kubelet[3001]: I0302 13:35:24.601510 3001 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:35:24.606040 kubelet[3001]: I0302 13:35:24.606010 3001 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:35:24.606248 kubelet[3001]: I0302 13:35:24.606225 3001 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:35:24.606358 kubelet[3001]: I0302 13:35:24.606341 3001 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:35:24.606728 kubelet[3001]: E0302 13:35:24.606523 3001 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:35:24.619818 sudo[3034]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 2 13:35:24.620531 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 2 13:35:24.713345 kubelet[3001]: E0302 13:35:24.711373 3001 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:35:24.813098 kubelet[3001]: I0302 13:35:24.796343 3001 apiserver.go:52] "Watching apiserver" Mar 2 13:35:24.912674 kubelet[3001]: E0302 13:35:24.912537 3001 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:35:24.924997 kubelet[3001]: I0302 13:35:24.924706 3001 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:35:24.925915 kubelet[3001]: I0302 13:35:24.925808 3001 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.926088 3001 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.926775 3001 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 
13:35:24.927266 kubelet[3001]: I0302 13:35:24.926796 3001 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.926826 3001 policy_none.go:49] "None policy: Start" Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.926921 3001 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.926941 3001 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:35:24.927266 kubelet[3001]: I0302 13:35:24.927152 3001 state_mem.go:75] "Updated machine memory state" Mar 2 13:35:25.016223 kubelet[3001]: E0302 13:35:25.013044 3001 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:35:25.016223 kubelet[3001]: I0302 13:35:25.013358 3001 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:35:25.016223 kubelet[3001]: I0302 13:35:25.013374 3001 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:35:25.016223 kubelet[3001]: I0302 13:35:25.016025 3001 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:35:25.033035 kubelet[3001]: E0302 13:35:25.029814 3001 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:35:25.222410 kubelet[3001]: I0302 13:35:25.222176 3001 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:35:25.360356 kubelet[3001]: I0302 13:35:25.356403 3001 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:35:25.360356 kubelet[3001]: I0302 13:35:25.356408 3001 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.360356 kubelet[3001]: I0302 13:35:25.360341 3001 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:35:25.397746 kubelet[3001]: I0302 13:35:25.389747 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.397746 kubelet[3001]: I0302 13:35:25.390089 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.397746 kubelet[3001]: I0302 13:35:25.390336 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:35:25.397746 kubelet[3001]: I0302 13:35:25.390403 3001 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:35:25.397746 kubelet[3001]: I0302 13:35:25.390424 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c3a8945630faa7859e98ee66162bf89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c3a8945630faa7859e98ee66162bf89\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:35:25.398509 kubelet[3001]: I0302 13:35:25.390444 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.398509 kubelet[3001]: I0302 13:35:25.390461 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.401993 kubelet[3001]: I0302 13:35:25.390478 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:35:25.556031 kubelet[3001]: I0302 13:35:25.538832 3001 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:35:25.587975 kubelet[3001]: I0302 13:35:25.587904 3001 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:35:25.716751 kubelet[3001]: E0302 13:35:25.714817 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:25.727958 kubelet[3001]: E0302 13:35:25.727089 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:25.736669 kubelet[3001]: I0302 13:35:25.728825 3001 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 13:35:25.737396 kubelet[3001]: I0302 13:35:25.737369 3001 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:35:25.737916 kubelet[3001]: E0302 13:35:25.734175 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:26.372521 kubelet[3001]: E0302 13:35:26.364368 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:26.417093 kubelet[3001]: E0302 13:35:26.391248 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:26.423008 kubelet[3001]: E0302 13:35:26.422766 3001 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:28.149703 kubelet[3001]: E0302 13:35:28.136173 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:28.157273 kubelet[3001]: E0302 13:35:28.147699 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:29.327720 kubelet[3001]: E0302 13:35:29.327136 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:30.594816 kubelet[3001]: I0302 13:35:30.594158 3001 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:35:30.601959 containerd[1564]: time="2026-03-02T13:35:30.595295882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 2 13:35:30.604201 kubelet[3001]: I0302 13:35:30.595855 3001 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:35:32.117770 kubelet[3001]: E0302 13:35:32.109988 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:33.187953 kubelet[3001]: I0302 13:35:33.186050 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.185865189 podStartE2EDuration="8.185865189s" podCreationTimestamp="2026-03-02 13:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:35:32.738505438 +0000 UTC m=+9.793187658" watchObservedRunningTime="2026-03-02 13:35:33.185865189 +0000 UTC m=+10.240547379" Mar 2 13:35:33.361490 kubelet[3001]: E0302 13:35:33.360949 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:33.943214 kubelet[3001]: E0302 13:35:33.941723 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:34.619063 kubelet[3001]: E0302 13:35:34.617349 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:35.440315 kubelet[3001]: I0302 13:35:35.432067 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.432037426 podStartE2EDuration="10.432037426s" podCreationTimestamp="2026-03-02 13:35:25 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:35:33.186869003 +0000 UTC m=+10.241551233" watchObservedRunningTime="2026-03-02 13:35:35.432037426 +0000 UTC m=+12.486719615" Mar 2 13:35:35.627443 kubelet[3001]: E0302 13:35:35.627377 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:36.230208 sudo[3034]: pam_unix(sudo:session): session closed for user root Mar 2 13:35:36.810692 kubelet[3001]: I0302 13:35:36.801301 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=11.80127606 podStartE2EDuration="11.80127606s" podCreationTimestamp="2026-03-02 13:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:35:35.963089732 +0000 UTC m=+13.017771922" watchObservedRunningTime="2026-03-02 13:35:36.80127606 +0000 UTC m=+13.855958270" Mar 2 13:35:37.704511 kubelet[3001]: I0302 13:35:37.619314 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d963feb-c47d-44d7-9fc8-5bec5108db70-kube-proxy\") pod \"kube-proxy-mm7dw\" (UID: \"4d963feb-c47d-44d7-9fc8-5bec5108db70\") " pod="kube-system/kube-proxy-mm7dw" Mar 2 13:35:37.704511 kubelet[3001]: I0302 13:35:37.703445 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d963feb-c47d-44d7-9fc8-5bec5108db70-xtables-lock\") pod \"kube-proxy-mm7dw\" (UID: \"4d963feb-c47d-44d7-9fc8-5bec5108db70\") " pod="kube-system/kube-proxy-mm7dw" Mar 2 13:35:37.704511 kubelet[3001]: I0302 13:35:37.703984 3001 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnsd\" (UniqueName: \"kubernetes.io/projected/4d963feb-c47d-44d7-9fc8-5bec5108db70-kube-api-access-sdnsd\") pod \"kube-proxy-mm7dw\" (UID: \"4d963feb-c47d-44d7-9fc8-5bec5108db70\") " pod="kube-system/kube-proxy-mm7dw" Mar 2 13:35:37.704511 kubelet[3001]: I0302 13:35:37.704126 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d963feb-c47d-44d7-9fc8-5bec5108db70-lib-modules\") pod \"kube-proxy-mm7dw\" (UID: \"4d963feb-c47d-44d7-9fc8-5bec5108db70\") " pod="kube-system/kube-proxy-mm7dw" Mar 2 13:35:37.819834 systemd[1]: Created slice kubepods-besteffort-pod4d963feb_c47d_44d7_9fc8_5bec5108db70.slice - libcontainer container kubepods-besteffort-pod4d963feb_c47d_44d7_9fc8_5bec5108db70.slice. Mar 2 13:35:38.227075 kubelet[3001]: E0302 13:35:38.226876 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:35:38.333251 containerd[1564]: time="2026-03-02T13:35:38.329887680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mm7dw,Uid:4d963feb-c47d-44d7-9fc8-5bec5108db70,Namespace:kube-system,Attempt:0,}" Mar 2 13:35:38.631784 containerd[1564]: time="2026-03-02T13:35:38.630984023Z" level=info msg="connecting to shim aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39" address="unix:///run/containerd/s/30e58dd54dab1f3bafcf2a938d79ee3dc840c165e61da4c543e7951419aa1cd5" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:35:39.098517 systemd[1]: Started cri-containerd-aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39.scope - libcontainer container aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39. 
Mar 2 13:35:40.940518 containerd[1564]: time="2026-03-02T13:35:40.940367387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mm7dw,Uid:4d963feb-c47d-44d7-9fc8-5bec5108db70,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39\""
Mar 2 13:35:40.964536 kubelet[3001]: E0302 13:35:40.952210 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:41.014189 containerd[1564]: time="2026-03-02T13:35:41.013085350Z" level=info msg="CreateContainer within sandbox \"aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:35:41.479767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721340143.mount: Deactivated successfully.
Mar 2 13:35:41.530727 containerd[1564]: time="2026-03-02T13:35:41.530504342Z" level=info msg="Container a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:35:41.663235 containerd[1564]: time="2026-03-02T13:35:41.661064081Z" level=info msg="CreateContainer within sandbox \"aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564\""
Mar 2 13:35:41.721470 containerd[1564]: time="2026-03-02T13:35:41.719545158Z" level=info msg="StartContainer for \"a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564\""
Mar 2 13:35:41.799277 containerd[1564]: time="2026-03-02T13:35:41.799091173Z" level=info msg="connecting to shim a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564" address="unix:///run/containerd/s/30e58dd54dab1f3bafcf2a938d79ee3dc840c165e61da4c543e7951419aa1cd5" protocol=ttrpc version=3
Mar 2 13:35:42.204888 systemd[1]: Started cri-containerd-a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564.scope - libcontainer container a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564.
Mar 2 13:35:44.117337 containerd[1564]: time="2026-03-02T13:35:44.116832628Z" level=info msg="StartContainer for \"a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564\" returns successfully"
Mar 2 13:35:44.477005 kubelet[3001]: E0302 13:35:44.457546 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:44.723742 kubelet[3001]: I0302 13:35:44.718881 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mm7dw" podStartSLOduration=9.71885834 podStartE2EDuration="9.71885834s" podCreationTimestamp="2026-03-02 13:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:35:44.701708135 +0000 UTC m=+21.756390396" watchObservedRunningTime="2026-03-02 13:35:44.71885834 +0000 UTC m=+21.773540540"
Mar 2 13:35:45.703705 kubelet[3001]: E0302 13:35:45.702066 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:45.710120 kubelet[3001]: I0302 13:35:45.702404 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-run\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.710120 kubelet[3001]: I0302 13:35:45.708909 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-lib-modules\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.710120 kubelet[3001]: I0302 13:35:45.709019 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-xtables-lock\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.710120 kubelet[3001]: I0302 13:35:45.709054 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-bpf-maps\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.703708 systemd[1]: Created slice kubepods-burstable-pod75e718b7_73eb_4c96_86ba_b3f5c425bc53.slice - libcontainer container kubepods-burstable-pod75e718b7_73eb_4c96_86ba_b3f5c425bc53.slice.
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711218 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-cgroup\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711266 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cni-path\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711286 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75e718b7-73eb-4c96-86ba-b3f5c425bc53-clustermesh-secrets\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711305 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-config-path\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711327 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-net\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714088 kubelet[3001]: I0302 13:35:45.711346 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hubble-tls\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714320 kubelet[3001]: I0302 13:35:45.711439 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6x2\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-kube-api-access-vg6x2\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714320 kubelet[3001]: I0302 13:35:45.711472 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hostproc\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714320 kubelet[3001]: I0302 13:35:45.711494 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-etc-cni-netd\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.714320 kubelet[3001]: I0302 13:35:45.711515 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-kernel\") pod \"cilium-j52kl\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " pod="kube-system/cilium-j52kl"
Mar 2 13:35:45.824062 systemd[1]: Created slice kubepods-besteffort-pod0bc16fa9_9b8c_49ca_9fa7_89c2e1c8a819.slice - libcontainer container kubepods-besteffort-pod0bc16fa9_9b8c_49ca_9fa7_89c2e1c8a819.slice.
Mar 2 13:35:45.879511 kubelet[3001]: I0302 13:35:45.856527 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bjp7p\" (UID: \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\") " pod="kube-system/cilium-operator-6c4d7847fc-bjp7p"
Mar 2 13:35:45.879511 kubelet[3001]: I0302 13:35:45.856926 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wlz\" (UniqueName: \"kubernetes.io/projected/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-kube-api-access-w4wlz\") pod \"cilium-operator-6c4d7847fc-bjp7p\" (UID: \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\") " pod="kube-system/cilium-operator-6c4d7847fc-bjp7p"
Mar 2 13:35:46.556540 kubelet[3001]: E0302 13:35:46.556493 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:46.593104 containerd[1564]: time="2026-03-02T13:35:46.590547903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bjp7p,Uid:0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819,Namespace:kube-system,Attempt:0,}"
Mar 2 13:35:46.667177 kubelet[3001]: E0302 13:35:46.666355 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:46.688287 containerd[1564]: time="2026-03-02T13:35:46.687543786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j52kl,Uid:75e718b7-73eb-4c96-86ba-b3f5c425bc53,Namespace:kube-system,Attempt:0,}"
Mar 2 13:35:48.095296 containerd[1564]: time="2026-03-02T13:35:48.075732528Z" level=info msg="connecting to shim d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d" address="unix:///run/containerd/s/8e714ef65bbef10be3d66f29fb213ef98aaeb55061fbdde9e11427f8c49ba948" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:35:48.261750 containerd[1564]: time="2026-03-02T13:35:48.261546858Z" level=info msg="connecting to shim 322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:35:48.997415 systemd[1]: Started cri-containerd-d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d.scope - libcontainer container d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d.
Mar 2 13:35:49.092301 systemd[1]: Started cri-containerd-322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90.scope - libcontainer container 322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90.
Mar 2 13:35:50.555178 containerd[1564]: time="2026-03-02T13:35:50.554494615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j52kl,Uid:75e718b7-73eb-4c96-86ba-b3f5c425bc53,Namespace:kube-system,Attempt:0,} returns sandbox id \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\""
Mar 2 13:35:50.607327 kubelet[3001]: E0302 13:35:50.601738 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:35:50.709356 containerd[1564]: time="2026-03-02T13:35:50.708767063Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 2 13:35:51.324860 containerd[1564]: time="2026-03-02T13:35:51.324071028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bjp7p,Uid:0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\""
Mar 2 13:35:51.342193 kubelet[3001]: E0302 13:35:51.335840 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:36:06.863924 kubelet[3001]: E0302 13:36:06.860536 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.921s"
Mar 2 13:36:30.744908 kubelet[3001]: E0302 13:36:30.744854 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:36:34.616362 kubelet[3001]: E0302 13:36:34.616170 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:36:39.864142 kubelet[3001]: E0302 13:36:39.844847 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.181s"
Mar 2 13:36:42.804358 kubelet[3001]: E0302 13:36:42.797182 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.172s"
Mar 2 13:36:43.643243 kubelet[3001]: E0302 13:36:43.642541 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:36:50.642794 kubelet[3001]: E0302 13:36:50.637985 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:37:11.298506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463133769.mount: Deactivated successfully.
Mar 2 13:37:24.056389 kubelet[3001]: E0302 13:37:24.054418 3001 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Mar 2 13:37:27.652475 kubelet[3001]: E0302 13:37:27.652401 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:32.660762 kubelet[3001]: E0302 13:37:32.656436 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:34.631664 kubelet[3001]: E0302 13:37:34.631481 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:37:37.670524 kubelet[3001]: E0302 13:37:37.668242 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:42.723691 kubelet[3001]: E0302 13:37:42.695548 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:47.731901 kubelet[3001]: E0302 13:37:47.731808 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:52.078246 containerd[1564]: time="2026-03-02T13:37:52.076527683Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:37:52.111841 containerd[1564]: time="2026-03-02T13:37:52.111407475Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 2 13:37:52.160269 containerd[1564]: time="2026-03-02T13:37:52.160070486Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:37:52.215441 containerd[1564]: time="2026-03-02T13:37:52.207901797Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 2m1.499073691s"
Mar 2 13:37:52.215441 containerd[1564]: time="2026-03-02T13:37:52.207963772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 2 13:37:52.258959 containerd[1564]: time="2026-03-02T13:37:52.258902227Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 2 13:37:52.366774 containerd[1564]: time="2026-03-02T13:37:52.364370311Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:37:52.546775 containerd[1564]: time="2026-03-02T13:37:52.544389021Z" level=info msg="Container d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:37:52.804315 containerd[1564]: time="2026-03-02T13:37:52.803988273Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\""
Mar 2 13:37:52.820725 containerd[1564]: time="2026-03-02T13:37:52.820350896Z" level=info msg="StartContainer for \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\""
Mar 2 13:37:52.825681 containerd[1564]: time="2026-03-02T13:37:52.825019182Z" level=info msg="connecting to shim d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" protocol=ttrpc version=3
Mar 2 13:37:52.842358 kubelet[3001]: E0302 13:37:52.838444 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:37:53.391867 systemd[1]: Started cri-containerd-d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9.scope - libcontainer container d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9.
Mar 2 13:37:55.058677 containerd[1564]: time="2026-03-02T13:37:55.057986217Z" level=info msg="StartContainer for \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\" returns successfully"
Mar 2 13:37:55.302894 systemd[1]: cri-containerd-d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9.scope: Deactivated successfully.
Mar 2 13:37:55.507485 containerd[1564]: time="2026-03-02T13:37:55.505406057Z" level=info msg="received container exit event container_id:\"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\" id:\"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\" pid:3487 exited_at:{seconds:1772458675 nanos:403865580}"
Mar 2 13:37:55.556487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524851093.mount: Deactivated successfully.
Mar 2 13:37:56.332541 kubelet[3001]: E0302 13:37:56.298828 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:37:56.429500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9-rootfs.mount: Deactivated successfully.
Mar 2 13:37:57.496469 kubelet[3001]: E0302 13:37:57.467467 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:37:57.627669 containerd[1564]: time="2026-03-02T13:37:57.626335575Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:38:03.358462 containerd[1564]: time="2026-03-02T13:38:03.335822078Z" level=info msg="Container 10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:38:03.383907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887783711.mount: Deactivated successfully.
Mar 2 13:38:03.539707 kubelet[3001]: E0302 13:38:03.534313 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:03.733910 kubelet[3001]: E0302 13:38:03.723802 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:38:03.978043 containerd[1564]: time="2026-03-02T13:38:03.945323916Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\""
Mar 2 13:38:03.982544 containerd[1564]: time="2026-03-02T13:38:03.978902868Z" level=info msg="StartContainer for \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\""
Mar 2 13:38:04.007976 containerd[1564]: time="2026-03-02T13:38:03.992087449Z" level=info msg="connecting to shim 10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" protocol=ttrpc version=3
Mar 2 13:38:04.321236 systemd[1]: Started cri-containerd-10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80.scope - libcontainer container 10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80.
Mar 2 13:38:05.428336 containerd[1564]: time="2026-03-02T13:38:05.428060999Z" level=info msg="StartContainer for \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\" returns successfully"
Mar 2 13:38:05.759404 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:38:05.760102 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:38:05.810944 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:38:05.842376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:38:05.864489 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 13:38:05.876266 systemd[1]: cri-containerd-10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80.scope: Deactivated successfully.
Mar 2 13:38:06.009249 containerd[1564]: time="2026-03-02T13:38:06.009101390Z" level=info msg="received container exit event container_id:\"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\" id:\"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\" pid:3545 exited_at:{seconds:1772458685 nanos:922030514}"
Mar 2 13:38:06.318263 kubelet[3001]: E0302 13:38:06.316390 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:06.338431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:38:07.360798 kubelet[3001]: E0302 13:38:07.353331 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:07.423852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80-rootfs.mount: Deactivated successfully.
Mar 2 13:38:07.626231 kubelet[3001]: E0302 13:38:07.620515 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:08.477523 kubelet[3001]: E0302 13:38:08.473257 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:08.562716 containerd[1564]: time="2026-03-02T13:38:08.559894344Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:38:08.886284 kubelet[3001]: E0302 13:38:08.858949 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:38:09.335519 containerd[1564]: time="2026-03-02T13:38:09.308377337Z" level=info msg="Container 6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:38:09.647420 containerd[1564]: time="2026-03-02T13:38:09.625232629Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\""
Mar 2 13:38:09.665005 containerd[1564]: time="2026-03-02T13:38:09.656041100Z" level=info msg="StartContainer for \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\""
Mar 2 13:38:09.713848 containerd[1564]: time="2026-03-02T13:38:09.712005514Z" level=info msg="connecting to shim 6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" protocol=ttrpc version=3
Mar 2 13:38:10.304521 systemd[1]: Started cri-containerd-6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab.scope - libcontainer container 6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab.
Mar 2 13:38:11.492818 containerd[1564]: time="2026-03-02T13:38:11.491951256Z" level=info msg="StartContainer for \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\" returns successfully"
Mar 2 13:38:11.496021 systemd[1]: cri-containerd-6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab.scope: Deactivated successfully.
Mar 2 13:38:11.669113 containerd[1564]: time="2026-03-02T13:38:11.659522428Z" level=info msg="received container exit event container_id:\"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\" id:\"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\" pid:3596 exited_at:{seconds:1772458691 nanos:626795074}"
Mar 2 13:38:11.669474 kubelet[3001]: E0302 13:38:11.661990 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:12.032226 kubelet[3001]: E0302 13:38:12.032085 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:13.026915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab-rootfs.mount: Deactivated successfully.
Mar 2 13:38:13.995729 kubelet[3001]: E0302 13:38:13.993969 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:38:15.206206 kubelet[3001]: E0302 13:38:15.205044 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:15.301821 containerd[1564]: time="2026-03-02T13:38:15.295820997Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:38:15.346450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249189678.mount: Deactivated successfully.
Mar 2 13:38:15.526033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998441052.mount: Deactivated successfully.
Mar 2 13:38:15.560478 containerd[1564]: time="2026-03-02T13:38:15.554810748Z" level=info msg="Container 0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:38:15.829015 containerd[1564]: time="2026-03-02T13:38:15.828918817Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\""
Mar 2 13:38:15.884843 containerd[1564]: time="2026-03-02T13:38:15.875976781Z" level=info msg="StartContainer for \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\""
Mar 2 13:38:15.927470 containerd[1564]: time="2026-03-02T13:38:15.916395238Z" level=info msg="connecting to shim 0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" protocol=ttrpc version=3
Mar 2 13:38:16.393029 systemd[1]: Started cri-containerd-0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c.scope - libcontainer container 0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c.
Mar 2 13:38:16.975224 systemd[1]: cri-containerd-0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c.scope: Deactivated successfully.
Mar 2 13:38:17.008419 containerd[1564]: time="2026-03-02T13:38:17.006765643Z" level=info msg="received container exit event container_id:\"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\" id:\"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\" pid:3638 exited_at:{seconds:1772458696 nanos:995778325}"
Mar 2 13:38:17.014849 containerd[1564]: time="2026-03-02T13:38:17.014816271Z" level=info msg="StartContainer for \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\" returns successfully"
Mar 2 13:38:17.286950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c-rootfs.mount: Deactivated successfully.
Mar 2 13:38:17.612522 kubelet[3001]: E0302 13:38:17.612490 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:17.743950 containerd[1564]: time="2026-03-02T13:38:17.743903633Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:38:18.129004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353968481.mount: Deactivated successfully.
Mar 2 13:38:18.143445 containerd[1564]: time="2026-03-02T13:38:18.141021358Z" level=info msg="Container 8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:38:18.245793 containerd[1564]: time="2026-03-02T13:38:18.244510516Z" level=info msg="CreateContainer within sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\""
Mar 2 13:38:18.251325 containerd[1564]: time="2026-03-02T13:38:18.246900698Z" level=info msg="StartContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\""
Mar 2 13:38:18.307210 containerd[1564]: time="2026-03-02T13:38:18.306419596Z" level=info msg="connecting to shim 8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba" address="unix:///run/containerd/s/31f1a9e9e3d1ac88539b7c97ba7bf90b913d5d4b03cc82f76723c39967693302" protocol=ttrpc version=3
Mar 2 13:38:18.696280 systemd[1]: Started cri-containerd-8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba.scope - libcontainer container 8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba.
Mar 2 13:38:19.025827 kubelet[3001]: E0302 13:38:19.021491 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:38:19.078862 containerd[1564]: time="2026-03-02T13:38:19.077890918Z" level=info msg="StartContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" returns successfully"
Mar 2 13:38:21.973792 kubelet[3001]: E0302 13:38:21.973518 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:22.948760 containerd[1564]: time="2026-03-02T13:38:22.947868743Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:38:22.956404 containerd[1564]: time="2026-03-02T13:38:22.956364551Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 2 13:38:22.974946 containerd[1564]: time="2026-03-02T13:38:22.968521107Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:38:22.991016 containerd[1564]: time="2026-03-02T13:38:22.990798883Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 30.721984573s"
Mar 2 13:38:22.991016 containerd[1564]: time="2026-03-02T13:38:22.990927152Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 2 13:38:22.994390 kubelet[3001]: E0302 13:38:22.993042 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:23.089911 containerd[1564]: time="2026-03-02T13:38:23.076894051Z" level=info msg="CreateContainer within sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 2 13:38:23.357287 containerd[1564]: time="2026-03-02T13:38:23.357229246Z" level=info msg="Container c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:38:23.517169 containerd[1564]: time="2026-03-02T13:38:23.517021991Z" level=info msg="CreateContainer within sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\""
Mar 2 13:38:23.530858 containerd[1564]: time="2026-03-02T13:38:23.518435254Z" level=info msg="StartContainer for \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\""
Mar 2 13:38:23.530858 containerd[1564]: time="2026-03-02T13:38:23.526915011Z" level=info msg="connecting to shim c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8" address="unix:///run/containerd/s/8e714ef65bbef10be3d66f29fb213ef98aaeb55061fbdde9e11427f8c49ba948" protocol=ttrpc version=3
Mar 2 13:38:23.741253 systemd[1]: Started cri-containerd-c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8.scope - libcontainer container c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8.
Mar 2 13:38:24.660348 containerd[1564]: time="2026-03-02T13:38:24.658447741Z" level=info msg="StartContainer for \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\" returns successfully"
Mar 2 13:38:25.264856 kubelet[3001]: E0302 13:38:25.212315 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:25.931278 kubelet[3001]: I0302 13:38:25.927775 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j52kl" podStartSLOduration=39.37115205 podStartE2EDuration="2m40.927756529s" podCreationTimestamp="2026-03-02 13:35:45 +0000 UTC" firstStartedPulling="2026-03-02 13:35:50.691365975 +0000 UTC m=+27.746048165" lastFinishedPulling="2026-03-02 13:37:52.247970434 +0000 UTC m=+149.302652644" observedRunningTime="2026-03-02 13:38:22.258049582 +0000 UTC m=+179.312731802" watchObservedRunningTime="2026-03-02 13:38:25.927756529 +0000 UTC m=+182.982438720"
Mar 2 13:38:26.186782 kubelet[3001]: E0302 13:38:26.185440 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:26.428834 kubelet[3001]: I0302 13:38:26.428442 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bjp7p" podStartSLOduration=9.851615514 podStartE2EDuration="2m41.42841766s" podCreationTimestamp="2026-03-02 13:35:45 +0000 UTC" firstStartedPulling="2026-03-02 13:35:51.42453002 +0000 UTC m=+28.479212210" lastFinishedPulling="2026-03-02 13:38:23.001332156 +0000 UTC m=+180.056014356" observedRunningTime="2026-03-02 13:38:25.928719943 +0000 UTC m=+182.983402153" watchObservedRunningTime="2026-03-02 13:38:26.42841766 +0000 UTC m=+183.483099860"
Mar 2 13:38:26.551165 systemd[1]: Created slice kubepods-burstable-pod99d29702_8a17_4537_af77_db9697b15fa4.slice - libcontainer container kubepods-burstable-pod99d29702_8a17_4537_af77_db9697b15fa4.slice.
Mar 2 13:38:26.562979 kubelet[3001]: I0302 13:38:26.560492 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99d29702-8a17-4537-af77-db9697b15fa4-config-volume\") pod \"coredns-674b8bbfcf-hwmqr\" (UID: \"99d29702-8a17-4537-af77-db9697b15fa4\") " pod="kube-system/coredns-674b8bbfcf-hwmqr"
Mar 2 13:38:26.562979 kubelet[3001]: I0302 13:38:26.560533 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdtf\" (UniqueName: \"kubernetes.io/projected/99d29702-8a17-4537-af77-db9697b15fa4-kube-api-access-nqdtf\") pod \"coredns-674b8bbfcf-hwmqr\" (UID: \"99d29702-8a17-4537-af77-db9697b15fa4\") " pod="kube-system/coredns-674b8bbfcf-hwmqr"
Mar 2 13:38:26.780944 kubelet[3001]: I0302 13:38:26.780901 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2142309-6258-4a44-a2d6-c66a18b48f65-config-volume\") pod \"coredns-674b8bbfcf-tfzv8\" (UID: \"d2142309-6258-4a44-a2d6-c66a18b48f65\") " pod="kube-system/coredns-674b8bbfcf-tfzv8"
Mar 2 13:38:26.781263 kubelet[3001]: I0302 13:38:26.781237 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzjwn\" (UniqueName: \"kubernetes.io/projected/d2142309-6258-4a44-a2d6-c66a18b48f65-kube-api-access-xzjwn\") pod \"coredns-674b8bbfcf-tfzv8\" (UID: \"d2142309-6258-4a44-a2d6-c66a18b48f65\") " pod="kube-system/coredns-674b8bbfcf-tfzv8"
Mar 2 13:38:26.957386 systemd[1]: Created slice kubepods-burstable-podd2142309_6258_4a44_a2d6_c66a18b48f65.slice - libcontainer container kubepods-burstable-podd2142309_6258_4a44_a2d6_c66a18b48f65.slice.
Mar 2 13:38:27.614728 kubelet[3001]: E0302 13:38:27.610510 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:27.708296 containerd[1564]: time="2026-03-02T13:38:27.708240508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hwmqr,Uid:99d29702-8a17-4537-af77-db9697b15fa4,Namespace:kube-system,Attempt:0,}"
Mar 2 13:38:27.833878 kubelet[3001]: E0302 13:38:27.825994 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:28.056520 containerd[1564]: time="2026-03-02T13:38:28.033828678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfzv8,Uid:d2142309-6258-4a44-a2d6-c66a18b48f65,Namespace:kube-system,Attempt:0,}"
Mar 2 13:38:39.977837 kubelet[3001]: E0302 13:38:39.969977 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:41.445803 systemd-networkd[1454]: cilium_host: Link UP
Mar 2 13:38:41.448738 systemd-networkd[1454]: cilium_net: Link UP
Mar 2 13:38:41.449144 systemd-networkd[1454]: cilium_net: Gained carrier
Mar 2 13:38:41.449409 systemd-networkd[1454]: cilium_host: Gained carrier
Mar 2 13:38:42.298741 systemd-networkd[1454]: cilium_host: Gained IPv6LL
Mar 2 13:38:42.429311 systemd-networkd[1454]: cilium_net: Gained IPv6LL
Mar 2 13:38:44.649830 kubelet[3001]: E0302 13:38:44.644527 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:38:45.929458 systemd-networkd[1454]: cilium_vxlan: Link UP
Mar 2 13:38:45.929475 systemd-networkd[1454]: cilium_vxlan: Gained carrier
Mar 2 13:38:47.404462 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL
Mar 2 13:38:50.075834 kernel: NET: Registered PF_ALG protocol family
Mar 2 13:38:50.925889 containerd[1564]: time="2026-03-02T13:38:50.721947437Z" level=warning msg="container event discarded" container=6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8 type=CONTAINER_CREATED_EVENT
Mar 2 13:38:51.252891 containerd[1564]: time="2026-03-02T13:38:51.251420861Z" level=warning msg="container event discarded" container=6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8 type=CONTAINER_STARTED_EVENT
Mar 2 13:38:51.252891 containerd[1564]: time="2026-03-02T13:38:51.251484910Z" level=warning msg="container event discarded" container=17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c type=CONTAINER_CREATED_EVENT
Mar 2 13:38:51.252891 containerd[1564]: time="2026-03-02T13:38:51.251502894Z" level=warning msg="container event discarded" container=17324997985f371d2e38bb3f47ea6eaad5b856261dcaa2c8b417b05757af9a5c type=CONTAINER_STARTED_EVENT
Mar 2 13:38:51.252891 containerd[1564]: time="2026-03-02T13:38:51.251514657Z" level=warning msg="container event discarded" container=38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e type=CONTAINER_CREATED_EVENT
Mar 2 13:38:51.286778 containerd[1564]: time="2026-03-02T13:38:51.283999565Z" level=warning msg="container event discarded" container=84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da type=CONTAINER_CREATED_EVENT
Mar 2 13:38:51.286778 containerd[1564]: time="2026-03-02T13:38:51.284240133Z" level=warning msg="container event discarded" container=84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da type=CONTAINER_STARTED_EVENT
Mar 2 13:38:51.350162 containerd[1564]: time="2026-03-02T13:38:51.349831553Z" level=warning msg="container event discarded" container=aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60 type=CONTAINER_CREATED_EVENT
Mar 2 13:38:51.684992 containerd[1564]: time="2026-03-02T13:38:51.678868563Z" level=warning msg="container event discarded" container=2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1 type=CONTAINER_CREATED_EVENT
Mar 2 13:38:52.511490 containerd[1564]: time="2026-03-02T13:38:52.511411607Z" level=warning msg="container event discarded" container=aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60 type=CONTAINER_STARTED_EVENT
Mar 2 13:38:52.519999 containerd[1564]: time="2026-03-02T13:38:52.519908175Z" level=warning msg="container event discarded" container=38a3886f20a31af6fe26d7af20875d31dbc271030a022154731904a2aec9df3e type=CONTAINER_STARTED_EVENT
Mar 2 13:38:52.915264 containerd[1564]: time="2026-03-02T13:38:52.909435860Z" level=warning msg="container event discarded" container=2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1 type=CONTAINER_STARTED_EVENT
Mar 2 13:39:06.396786 systemd-networkd[1454]: lxc_health: Link UP
Mar 2 13:39:06.398362 systemd-networkd[1454]: lxc_health: Gained carrier
Mar 2 13:39:06.786348 kubelet[3001]: E0302 13:39:06.781915 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:07.613413 kubelet[3001]: E0302 13:39:07.610864 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:07.916921 containerd[1564]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Mar 2 13:39:07.916921 containerd[1564]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Mar 2 13:39:07.936856 systemd[1]: run-netns-cni\x2d440ca032\x2d4a1c\x2d47b4\x2d2171\x2d2fa023a1f1db.mount: Deactivated successfully.
Mar 2 13:39:07.937910 systemd[1]: run-netns-cni\x2df3ea0ba8\x2d18e1\x2dbc73\x2d10a9\x2d2ae408f5f9b0.mount: Deactivated successfully.
Mar 2 13:39:07.943263 containerd[1564]: time="2026-03-02T13:39:07.941799218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tfzv8,Uid:d2142309-6258-4a44-a2d6-c66a18b48f65,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f5205209fd1f41099ab46f16dafdbbb0712a41e058232f89036abdaec7572d4\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Mar 2 13:39:07.943824 kubelet[3001]: E0302 13:39:07.943460 3001 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 2 13:39:07.943824 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "3f5205209fd1f41099ab46f16dafdbbb0712a41e058232f89036abdaec7572d4": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.943824 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.943824 kubelet[3001]: >
Mar 2 13:39:07.944435 kubelet[3001]: E0302 13:39:07.943832 3001 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
Mar 2 13:39:07.944435 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "3f5205209fd1f41099ab46f16dafdbbb0712a41e058232f89036abdaec7572d4": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.944435 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.944435 kubelet[3001]: > pod="kube-system/coredns-674b8bbfcf-tfzv8"
Mar 2 13:39:07.945988 kubelet[3001]: E0302 13:39:07.943856 3001 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=<
Mar 2 13:39:07.945988 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "3f5205209fd1f41099ab46f16dafdbbb0712a41e058232f89036abdaec7572d4": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.945988 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.945988 kubelet[3001]: > pod="kube-system/coredns-674b8bbfcf-tfzv8"
Mar 2 13:39:07.948994 kubelet[3001]: E0302 13:39:07.948877 3001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tfzv8_kube-system(d2142309-6258-4a44-a2d6-c66a18b48f65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tfzv8_kube-system(d2142309-6258-4a44-a2d6-c66a18b48f65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f5205209fd1f41099ab46f16dafdbbb0712a41e058232f89036abdaec7572d4\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-tfzv8" podUID="d2142309-6258-4a44-a2d6-c66a18b48f65"
Mar 2 13:39:07.961303 containerd[1564]: time="2026-03-02T13:39:07.961227601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hwmqr,Uid:99d29702-8a17-4537-af77-db9697b15fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd60cb370b4c102b292943ade286cbecdfeb5d268a6e64ae5cd41634b884f3d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"
Mar 2 13:39:07.967154 kubelet[3001]: E0302 13:39:07.967008 3001 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 2 13:39:07.967154 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "edd60cb370b4c102b292943ade286cbecdfeb5d268a6e64ae5cd41634b884f3d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.967154 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.967154 kubelet[3001]: >
Mar 2 13:39:07.969292 kubelet[3001]: E0302 13:39:07.967999 3001 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
Mar 2 13:39:07.969292 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "edd60cb370b4c102b292943ade286cbecdfeb5d268a6e64ae5cd41634b884f3d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.969292 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.969292 kubelet[3001]: > pod="kube-system/coredns-674b8bbfcf-hwmqr"
Mar 2 13:39:07.969292 kubelet[3001]: E0302 13:39:07.968139 3001 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=<
Mar 2 13:39:07.969292 kubelet[3001]: rpc error: code = Unknown desc = failed to setup network for sandbox "edd60cb370b4c102b292943ade286cbecdfeb5d268a6e64ae5cd41634b884f3d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Mar 2 13:39:07.969292 kubelet[3001]: Is the agent running?
Mar 2 13:39:07.969292 kubelet[3001]: > pod="kube-system/coredns-674b8bbfcf-hwmqr"
Mar 2 13:39:07.969787 kubelet[3001]: E0302 13:39:07.968202 3001 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hwmqr_kube-system(99d29702-8a17-4537-af77-db9697b15fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hwmqr_kube-system(99d29702-8a17-4537-af77-db9697b15fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edd60cb370b4c102b292943ade286cbecdfeb5d268a6e64ae5cd41634b884f3d\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-hwmqr" podUID="99d29702-8a17-4537-af77-db9697b15fa4"
Mar 2 13:39:08.081885 systemd-networkd[1454]: lxc_health: Gained IPv6LL
Mar 2 13:39:13.617759 kubelet[3001]: E0302 13:39:13.615864 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:14.363764 kubelet[3001]: I0302 13:39:14.363501 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqdtf\" (UniqueName: \"kubernetes.io/projected/99d29702-8a17-4537-af77-db9697b15fa4-kube-api-access-nqdtf\") pod \"99d29702-8a17-4537-af77-db9697b15fa4\" (UID: \"99d29702-8a17-4537-af77-db9697b15fa4\") "
Mar 2 13:39:14.364726 kubelet[3001]: I0302 13:39:14.364292 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99d29702-8a17-4537-af77-db9697b15fa4-config-volume\") pod \"99d29702-8a17-4537-af77-db9697b15fa4\" (UID: \"99d29702-8a17-4537-af77-db9697b15fa4\") "
Mar 2 13:39:14.388547 kubelet[3001]: I0302 13:39:14.374946 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99d29702-8a17-4537-af77-db9697b15fa4-config-volume" (OuterVolumeSpecName: "config-volume") pod "99d29702-8a17-4537-af77-db9697b15fa4" (UID: "99d29702-8a17-4537-af77-db9697b15fa4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 13:39:14.466495 kubelet[3001]: I0302 13:39:14.466227 3001 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99d29702-8a17-4537-af77-db9697b15fa4-config-volume\") on node \"localhost\" DevicePath \"\""
Mar 2 13:39:14.826509 systemd[1]: var-lib-kubelet-pods-99d29702\x2d8a17\x2d4537\x2daf77\x2ddb9697b15fa4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqdtf.mount: Deactivated successfully.
Mar 2 13:39:14.864388 kubelet[3001]: I0302 13:39:14.864339 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d29702-8a17-4537-af77-db9697b15fa4-kube-api-access-nqdtf" (OuterVolumeSpecName: "kube-api-access-nqdtf") pod "99d29702-8a17-4537-af77-db9697b15fa4" (UID: "99d29702-8a17-4537-af77-db9697b15fa4"). InnerVolumeSpecName "kube-api-access-nqdtf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:39:14.906464 kubelet[3001]: I0302 13:39:14.904484 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2142309-6258-4a44-a2d6-c66a18b48f65-config-volume\") pod \"d2142309-6258-4a44-a2d6-c66a18b48f65\" (UID: \"d2142309-6258-4a44-a2d6-c66a18b48f65\") "
Mar 2 13:39:14.906868 kubelet[3001]: I0302 13:39:14.906838 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzjwn\" (UniqueName: \"kubernetes.io/projected/d2142309-6258-4a44-a2d6-c66a18b48f65-kube-api-access-xzjwn\") pod \"d2142309-6258-4a44-a2d6-c66a18b48f65\" (UID: \"d2142309-6258-4a44-a2d6-c66a18b48f65\") "
Mar 2 13:39:14.907021 kubelet[3001]: I0302 13:39:14.906999 3001 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqdtf\" (UniqueName: \"kubernetes.io/projected/99d29702-8a17-4537-af77-db9697b15fa4-kube-api-access-nqdtf\") on node \"localhost\" DevicePath \"\""
Mar 2 13:39:14.945773 kubelet[3001]: I0302 13:39:14.945526 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2142309-6258-4a44-a2d6-c66a18b48f65-config-volume" (OuterVolumeSpecName: "config-volume") pod "d2142309-6258-4a44-a2d6-c66a18b48f65" (UID: "d2142309-6258-4a44-a2d6-c66a18b48f65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 13:39:15.048779 kubelet[3001]: I0302 13:39:15.048548 3001 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2142309-6258-4a44-a2d6-c66a18b48f65-config-volume\") on node \"localhost\" DevicePath \"\""
Mar 2 13:39:15.219545 kubelet[3001]: I0302 13:39:15.201963 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2142309-6258-4a44-a2d6-c66a18b48f65-kube-api-access-xzjwn" (OuterVolumeSpecName: "kube-api-access-xzjwn") pod "d2142309-6258-4a44-a2d6-c66a18b48f65" (UID: "d2142309-6258-4a44-a2d6-c66a18b48f65"). InnerVolumeSpecName "kube-api-access-xzjwn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:39:15.244750 systemd[1]: var-lib-kubelet-pods-d2142309\x2d6258\x2d4a44\x2da2d6\x2dc66a18b48f65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzjwn.mount: Deactivated successfully.
Mar 2 13:39:15.297343 kubelet[3001]: I0302 13:39:15.297288 3001 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xzjwn\" (UniqueName: \"kubernetes.io/projected/d2142309-6258-4a44-a2d6-c66a18b48f65-kube-api-access-xzjwn\") on node \"localhost\" DevicePath \"\""
Mar 2 13:39:15.318915 systemd[1]: Removed slice kubepods-burstable-pod99d29702_8a17_4537_af77_db9697b15fa4.slice - libcontainer container kubepods-burstable-pod99d29702_8a17_4537_af77_db9697b15fa4.slice.
Mar 2 13:39:15.509335 kubelet[3001]: I0302 13:39:15.505502 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c58be385-4c69-4909-846c-3f79600699be-config-volume\") pod \"coredns-674b8bbfcf-xvbbf\" (UID: \"c58be385-4c69-4909-846c-3f79600699be\") " pod="kube-system/coredns-674b8bbfcf-xvbbf"
Mar 2 13:39:15.509768 kubelet[3001]: I0302 13:39:15.509542 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgf85\" (UniqueName: \"kubernetes.io/projected/c58be385-4c69-4909-846c-3f79600699be-kube-api-access-vgf85\") pod \"coredns-674b8bbfcf-xvbbf\" (UID: \"c58be385-4c69-4909-846c-3f79600699be\") " pod="kube-system/coredns-674b8bbfcf-xvbbf"
Mar 2 13:39:15.535972 systemd[1]: Created slice kubepods-burstable-podc58be385_4c69_4909_846c_3f79600699be.slice - libcontainer container kubepods-burstable-podc58be385_4c69_4909_846c_3f79600699be.slice.
Mar 2 13:39:15.714445 systemd[1]: Removed slice kubepods-burstable-podd2142309_6258_4a44_a2d6_c66a18b48f65.slice - libcontainer container kubepods-burstable-podd2142309_6258_4a44_a2d6_c66a18b48f65.slice.
Mar 2 13:39:16.293938 kubelet[3001]: E0302 13:39:16.293888 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:16.455406 containerd[1564]: time="2026-03-02T13:39:16.455339040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvbbf,Uid:c58be385-4c69-4909-846c-3f79600699be,Namespace:kube-system,Attempt:0,}"
Mar 2 13:39:16.631285 systemd[1]: Created slice kubepods-burstable-pode47bd08f_5321_414d_a2eb_26fcfaced446.slice - libcontainer container kubepods-burstable-pode47bd08f_5321_414d_a2eb_26fcfaced446.slice.
Mar 2 13:39:16.836767 kubelet[3001]: I0302 13:39:16.836282 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47bd08f-5321-414d-a2eb-26fcfaced446-config-volume\") pod \"coredns-674b8bbfcf-6tmn7\" (UID: \"e47bd08f-5321-414d-a2eb-26fcfaced446\") " pod="kube-system/coredns-674b8bbfcf-6tmn7"
Mar 2 13:39:16.910972 kubelet[3001]: I0302 13:39:16.890799 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92fp\" (UniqueName: \"kubernetes.io/projected/e47bd08f-5321-414d-a2eb-26fcfaced446-kube-api-access-w92fp\") pod \"coredns-674b8bbfcf-6tmn7\" (UID: \"e47bd08f-5321-414d-a2eb-26fcfaced446\") " pod="kube-system/coredns-674b8bbfcf-6tmn7"
Mar 2 13:39:18.002961 kubelet[3001]: E0302 13:39:18.002355 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:18.031357 containerd[1564]: time="2026-03-02T13:39:18.020963121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tmn7,Uid:e47bd08f-5321-414d-a2eb-26fcfaced446,Namespace:kube-system,Attempt:0,}"
Mar 2 13:39:18.400271 systemd-networkd[1454]: lxc6379c9a4c865: Link UP
Mar 2 13:39:18.522757 kernel: eth0: renamed from tmp1e211
Mar 2 13:39:18.646294 systemd-networkd[1454]: lxc6379c9a4c865: Gained carrier
Mar 2 13:39:18.990787 kubelet[3001]: I0302 13:39:18.990393 3001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d29702-8a17-4537-af77-db9697b15fa4" path="/var/lib/kubelet/pods/99d29702-8a17-4537-af77-db9697b15fa4/volumes"
Mar 2 13:39:19.028805 kubelet[3001]: I0302 13:39:19.028534 3001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2142309-6258-4a44-a2d6-c66a18b48f65" path="/var/lib/kubelet/pods/d2142309-6258-4a44-a2d6-c66a18b48f65/volumes"
Mar 2 13:39:19.306890 systemd-networkd[1454]: lxc9bbadec586a3: Link UP
Mar 2 13:39:19.422952 kernel: eth0: renamed from tmpf8e74
Mar 2 13:39:19.498815 systemd-networkd[1454]: lxc9bbadec586a3: Gained carrier
Mar 2 13:39:20.756862 systemd-networkd[1454]: lxc6379c9a4c865: Gained IPv6LL
Mar 2 13:39:21.206795 systemd-networkd[1454]: lxc9bbadec586a3: Gained IPv6LL
Mar 2 13:39:21.657288 kubelet[3001]: E0302 13:39:21.652375 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:39:27.184802 sudo[1797]: pam_unix(sudo:session): session closed for user root
Mar 2 13:39:27.210891 sshd[1796]: Connection closed by 10.0.0.1 port 49436
Mar 2 13:39:27.228708 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Mar 2 13:39:27.286771 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:49436.service: Deactivated successfully.
Mar 2 13:39:27.318867 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:39:27.323209 systemd[1]: session-9.scope: Consumed 29.760s CPU time, 238.9M memory peak.
Mar 2 13:39:27.395997 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:39:27.439451 systemd-logind[1541]: Removed session 9.
Mar 2 13:39:33.618247 kubelet[3001]: E0302 13:39:33.616992 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:34.613258 kubelet[3001]: E0302 13:39:34.613211 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:40.245785 containerd[1564]: time="2026-03-02T13:39:40.244318249Z" level=info msg="connecting to shim 1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b" address="unix:///run/containerd/s/dc170dce0f436d6e4388f4ab3692553d92d0b98d4251e54e95d37aa3ac5a76cf" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:39:40.246456 containerd[1564]: time="2026-03-02T13:39:40.245967831Z" level=info msg="connecting to shim f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418" address="unix:///run/containerd/s/ae2339db7979a99a0bee83fa88705ee8528b696040836025b0e6f849dba5f62b" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:39:40.689740 systemd[1]: Started cri-containerd-1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b.scope - libcontainer container 1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b. Mar 2 13:39:40.719393 systemd[1]: Started cri-containerd-f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418.scope - libcontainer container f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418. 
Mar 2 13:39:40.873867 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:39:40.884357 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:39:41.203513 containerd[1564]: time="2026-03-02T13:39:41.202416698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6tmn7,Uid:e47bd08f-5321-414d-a2eb-26fcfaced446,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418\"" Mar 2 13:39:41.211396 kubelet[3001]: E0302 13:39:41.210986 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:41.221668 containerd[1564]: time="2026-03-02T13:39:41.219541794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvbbf,Uid:c58be385-4c69-4909-846c-3f79600699be,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b\"" Mar 2 13:39:41.228853 kubelet[3001]: E0302 13:39:41.224769 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:41.241345 containerd[1564]: time="2026-03-02T13:39:41.238115399Z" level=info msg="CreateContainer within sandbox \"f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:39:41.270127 containerd[1564]: time="2026-03-02T13:39:41.261766062Z" level=info msg="CreateContainer within sandbox \"1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:39:41.424235 containerd[1564]: time="2026-03-02T13:39:41.421728852Z" 
level=info msg="Container ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:39:41.445324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448837029.mount: Deactivated successfully. Mar 2 13:39:41.522284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752545639.mount: Deactivated successfully. Mar 2 13:39:41.562783 containerd[1564]: time="2026-03-02T13:39:41.560962049Z" level=info msg="Container dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:39:41.660543 containerd[1564]: time="2026-03-02T13:39:41.653532839Z" level=info msg="CreateContainer within sandbox \"f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679\"" Mar 2 13:39:41.665310 containerd[1564]: time="2026-03-02T13:39:41.665269947Z" level=info msg="StartContainer for \"ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679\"" Mar 2 13:39:41.674375 containerd[1564]: time="2026-03-02T13:39:41.673840866Z" level=info msg="connecting to shim ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679" address="unix:///run/containerd/s/ae2339db7979a99a0bee83fa88705ee8528b696040836025b0e6f849dba5f62b" protocol=ttrpc version=3 Mar 2 13:39:41.714745 containerd[1564]: time="2026-03-02T13:39:41.709977344Z" level=info msg="CreateContainer within sandbox \"1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56\"" Mar 2 13:39:41.802547 containerd[1564]: time="2026-03-02T13:39:41.793972226Z" level=info msg="StartContainer for \"dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56\"" Mar 2 13:39:41.805153 containerd[1564]: 
time="2026-03-02T13:39:41.804989452Z" level=info msg="connecting to shim dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56" address="unix:///run/containerd/s/dc170dce0f436d6e4388f4ab3692553d92d0b98d4251e54e95d37aa3ac5a76cf" protocol=ttrpc version=3 Mar 2 13:39:41.908462 systemd[1]: Started cri-containerd-ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679.scope - libcontainer container ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679. Mar 2 13:39:42.118293 systemd[1]: Started cri-containerd-dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56.scope - libcontainer container dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56. Mar 2 13:39:42.779813 containerd[1564]: time="2026-03-02T13:39:42.779767694Z" level=info msg="StartContainer for \"ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679\" returns successfully" Mar 2 13:39:42.798429 containerd[1564]: time="2026-03-02T13:39:42.796809993Z" level=info msg="StartContainer for \"dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56\" returns successfully" Mar 2 13:39:43.671814 kubelet[3001]: E0302 13:39:43.670441 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:43.776504 kubelet[3001]: E0302 13:39:43.776369 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:44.142408 kubelet[3001]: I0302 13:39:44.135777 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xvbbf" podStartSLOduration=30.134004569 podStartE2EDuration="30.134004569s" podCreationTimestamp="2026-03-02 13:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-02 13:39:43.968378241 +0000 UTC m=+261.023060461" watchObservedRunningTime="2026-03-02 13:39:44.134004569 +0000 UTC m=+261.188686759" Mar 2 13:39:44.149464 kubelet[3001]: I0302 13:39:44.136256 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6tmn7" podStartSLOduration=29.136244022 podStartE2EDuration="29.136244022s" podCreationTimestamp="2026-03-02 13:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:39:44.124519859 +0000 UTC m=+261.179202048" watchObservedRunningTime="2026-03-02 13:39:44.136244022 +0000 UTC m=+261.190926212" Mar 2 13:39:44.769274 kubelet[3001]: E0302 13:39:44.761983 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:44.769274 kubelet[3001]: E0302 13:39:44.763949 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:45.819444 kubelet[3001]: E0302 13:39:45.810306 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:45.826435 kubelet[3001]: E0302 13:39:45.826400 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:46.851820 kubelet[3001]: E0302 13:39:46.846111 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:39:50.613244 kubelet[3001]: E0302 13:39:50.611462 3001 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:15.616730 kubelet[3001]: E0302 13:40:15.616268 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:25.356156 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Mar 2 13:40:25.948202 systemd-tmpfiles[4825]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 2 13:40:25.952809 systemd-tmpfiles[4825]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 2 13:40:25.958258 systemd-tmpfiles[4825]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 13:40:25.974460 systemd-tmpfiles[4825]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 13:40:25.988464 systemd-tmpfiles[4825]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 13:40:25.996461 systemd-tmpfiles[4825]: ACLs are not supported, ignoring. Mar 2 13:40:26.000296 systemd-tmpfiles[4825]: ACLs are not supported, ignoring. Mar 2 13:40:26.103394 systemd-tmpfiles[4825]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:40:26.104230 systemd-tmpfiles[4825]: Skipping /boot Mar 2 13:40:26.284828 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Mar 2 13:40:26.285391 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Mar 2 13:40:26.345298 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
Mar 2 13:40:27.617355 kubelet[3001]: E0302 13:40:27.612830 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:39.614996 kubelet[3001]: E0302 13:40:39.612524 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:40.957287 containerd[1564]: time="2026-03-02T13:40:40.954544888Z" level=warning msg="container event discarded" container=aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39 type=CONTAINER_CREATED_EVENT Mar 2 13:40:40.957287 containerd[1564]: time="2026-03-02T13:40:40.954858313Z" level=warning msg="container event discarded" container=aa4c761db3cc1e40b084eb8d2401a1390da4bfa7a8b086e7842c4a00967d7f39 type=CONTAINER_STARTED_EVENT Mar 2 13:40:41.655429 containerd[1564]: time="2026-03-02T13:40:41.654528331Z" level=warning msg="container event discarded" container=a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564 type=CONTAINER_CREATED_EVENT Mar 2 13:40:43.636281 kubelet[3001]: E0302 13:40:43.615448 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:44.122817 containerd[1564]: time="2026-03-02T13:40:44.122466146Z" level=warning msg="container event discarded" container=a2925fc3fdaf1c0d59af50b47ed6ccc3133d086cbe560626833c5ed32e8cb564 type=CONTAINER_STARTED_EVENT Mar 2 13:40:50.575949 containerd[1564]: time="2026-03-02T13:40:50.575517662Z" level=warning msg="container event discarded" container=322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90 type=CONTAINER_CREATED_EVENT Mar 2 13:40:50.575949 containerd[1564]: time="2026-03-02T13:40:50.575901108Z" level=warning msg="container event discarded" 
container=322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90 type=CONTAINER_STARTED_EVENT Mar 2 13:40:51.334530 containerd[1564]: time="2026-03-02T13:40:51.334417823Z" level=warning msg="container event discarded" container=d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d type=CONTAINER_CREATED_EVENT Mar 2 13:40:51.344221 containerd[1564]: time="2026-03-02T13:40:51.334929027Z" level=warning msg="container event discarded" container=d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d type=CONTAINER_STARTED_EVENT Mar 2 13:40:54.509894 kubelet[3001]: E0302 13:40:54.509780 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:54.531294 kubelet[3001]: E0302 13:40:54.524198 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:40:59.805950 kubelet[3001]: E0302 13:40:59.802300 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.166s" Mar 2 13:41:02.230845 kubelet[3001]: E0302 13:41:02.230726 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:41:03.620360 kubelet[3001]: E0302 13:41:03.618964 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:41:36.899914 kubelet[3001]: E0302 13:41:36.899192 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.275s" Mar 2 13:41:45.615799 kubelet[3001]: E0302 13:41:45.614312 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:41:45.615799 kubelet[3001]: E0302 13:41:45.615705 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:41:59.191913 kubelet[3001]: E0302 13:41:59.184980 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.573s" Mar 2 13:41:59.191913 kubelet[3001]: E0302 13:41:59.188447 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:09.614526 kubelet[3001]: E0302 13:42:09.613967 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:23.609300 kubelet[3001]: E0302 13:42:23.608835 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:24.615775 kubelet[3001]: E0302 13:42:24.614950 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:25.613529 kubelet[3001]: E0302 13:42:25.610529 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:31.610140 kubelet[3001]: E0302 13:42:31.608242 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:46.293933 kubelet[3001]: E0302 
13:42:46.292505 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.568s" Mar 2 13:42:52.817353 containerd[1564]: time="2026-03-02T13:42:52.817161879Z" level=warning msg="container event discarded" container=d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9 type=CONTAINER_CREATED_EVENT Mar 2 13:42:53.648295 kubelet[3001]: E0302 13:42:53.641864 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:42:55.046438 containerd[1564]: time="2026-03-02T13:42:55.044881258Z" level=warning msg="container event discarded" container=d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9 type=CONTAINER_STARTED_EVENT Mar 2 13:42:56.918343 containerd[1564]: time="2026-03-02T13:42:56.915963769Z" level=warning msg="container event discarded" container=d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9 type=CONTAINER_STOPPED_EVENT Mar 2 13:43:03.926383 containerd[1564]: time="2026-03-02T13:43:03.926239873Z" level=warning msg="container event discarded" container=10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80 type=CONTAINER_CREATED_EVENT Mar 2 13:43:05.435506 containerd[1564]: time="2026-03-02T13:43:05.435407418Z" level=warning msg="container event discarded" container=10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80 type=CONTAINER_STARTED_EVENT Mar 2 13:43:07.786492 containerd[1564]: time="2026-03-02T13:43:07.781470568Z" level=warning msg="container event discarded" container=10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80 type=CONTAINER_STOPPED_EVENT Mar 2 13:43:09.646725 containerd[1564]: time="2026-03-02T13:43:09.637233457Z" level=warning msg="container event discarded" container=6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab type=CONTAINER_CREATED_EVENT Mar 2 13:43:11.482321 
containerd[1564]: time="2026-03-02T13:43:11.470969505Z" level=warning msg="container event discarded" container=6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab type=CONTAINER_STARTED_EVENT Mar 2 13:43:11.624332 kubelet[3001]: E0302 13:43:11.621456 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:14.323298 containerd[1564]: time="2026-03-02T13:43:14.321929598Z" level=warning msg="container event discarded" container=6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab type=CONTAINER_STOPPED_EVENT Mar 2 13:43:15.822303 containerd[1564]: time="2026-03-02T13:43:15.822220041Z" level=warning msg="container event discarded" container=0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c type=CONTAINER_CREATED_EVENT Mar 2 13:43:17.025845 containerd[1564]: time="2026-03-02T13:43:17.025511023Z" level=warning msg="container event discarded" container=0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c type=CONTAINER_STARTED_EVENT Mar 2 13:43:17.482927 containerd[1564]: time="2026-03-02T13:43:17.482500408Z" level=warning msg="container event discarded" container=0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c type=CONTAINER_STOPPED_EVENT Mar 2 13:43:17.618473 kubelet[3001]: E0302 13:43:17.608842 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:18.257818 containerd[1564]: time="2026-03-02T13:43:18.254461576Z" level=warning msg="container event discarded" container=8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba type=CONTAINER_CREATED_EVENT Mar 2 13:43:19.085245 containerd[1564]: time="2026-03-02T13:43:19.084524729Z" level=warning msg="container event discarded" 
container=8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba type=CONTAINER_STARTED_EVENT Mar 2 13:43:23.491792 containerd[1564]: time="2026-03-02T13:43:23.491464893Z" level=warning msg="container event discarded" container=c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8 type=CONTAINER_CREATED_EVENT Mar 2 13:43:24.595504 containerd[1564]: time="2026-03-02T13:43:24.595400189Z" level=warning msg="container event discarded" container=c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8 type=CONTAINER_STARTED_EVENT Mar 2 13:43:31.608702 kubelet[3001]: E0302 13:43:31.608451 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:35.609441 kubelet[3001]: E0302 13:43:35.608894 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:38.616282 kubelet[3001]: E0302 13:43:38.616188 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:47.981835 kubelet[3001]: E0302 13:43:47.981109 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.194s" Mar 2 13:43:48.096495 kubelet[3001]: E0302 13:43:48.096458 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:43:54.937151 kubelet[3001]: E0302 13:43:54.936748 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:44:08.615470 kubelet[3001]: E0302 13:44:08.614539 3001 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:44:10.530411 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:42884.service - OpenSSH per-connection server daemon (10.0.0.1:42884). Mar 2 13:44:11.579096 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 42884 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:11.608537 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:11.727853 systemd-logind[1541]: New session 10 of user core. Mar 2 13:44:11.786502 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:44:13.747258 sshd[4860]: Connection closed by 10.0.0.1 port 42884 Mar 2 13:44:13.741925 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:13.800826 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:42884.service: Deactivated successfully. Mar 2 13:44:13.806879 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:44:13.846068 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:44:13.881392 systemd-logind[1541]: Removed session 10. Mar 2 13:44:17.627937 kubelet[3001]: E0302 13:44:17.610748 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:44:19.006901 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:42894.service - OpenSSH per-connection server daemon (10.0.0.1:42894). Mar 2 13:44:19.409535 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 42894 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:19.425783 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:19.514402 systemd-logind[1541]: New session 11 of user core. 
Mar 2 13:44:19.544280 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:44:20.611848 kubelet[3001]: E0302 13:44:20.611098 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:44:21.493233 sshd[4883]: Connection closed by 10.0.0.1 port 42894 Mar 2 13:44:21.506811 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:21.556410 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:42894.service: Deactivated successfully. Mar 2 13:44:21.589866 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:44:21.608449 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:44:21.624156 systemd-logind[1541]: Removed session 11. Mar 2 13:44:26.627333 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:56024.service - OpenSSH per-connection server daemon (10.0.0.1:56024). Mar 2 13:44:27.223818 sshd[4903]: Accepted publickey for core from 10.0.0.1 port 56024 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:27.243233 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:27.317492 systemd-logind[1541]: New session 12 of user core. Mar 2 13:44:27.340903 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:44:28.902198 sshd[4906]: Connection closed by 10.0.0.1 port 56024 Mar 2 13:44:28.910797 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:28.965131 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:56024.service: Deactivated successfully. Mar 2 13:44:28.997282 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:44:29.004235 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:44:29.009809 systemd-logind[1541]: Removed session 12. 
Mar 2 13:44:34.007901 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:51818.service - OpenSSH per-connection server daemon (10.0.0.1:51818). Mar 2 13:44:34.644329 kubelet[3001]: E0302 13:44:34.634825 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:44:34.977891 sshd[4921]: Accepted publickey for core from 10.0.0.1 port 51818 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:34.996858 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:35.110700 systemd-logind[1541]: New session 13 of user core. Mar 2 13:44:35.130872 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:44:36.262532 sshd[4924]: Connection closed by 10.0.0.1 port 51818 Mar 2 13:44:36.262337 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:36.291897 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:51818.service: Deactivated successfully. Mar 2 13:44:36.306112 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:44:36.337073 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:44:36.433439 systemd-logind[1541]: Removed session 13. 
Mar 2 13:44:41.225465 containerd[1564]: time="2026-03-02T13:44:41.213833107Z" level=warning msg="container event discarded" container=f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418 type=CONTAINER_CREATED_EVENT Mar 2 13:44:41.225465 containerd[1564]: time="2026-03-02T13:44:41.222367038Z" level=warning msg="container event discarded" container=f8e743d51b4ad0dfc5538a9bfbb70abc6e8c56136e9146a01d9228ddc1ec9418 type=CONTAINER_STARTED_EVENT Mar 2 13:44:41.260095 containerd[1564]: time="2026-03-02T13:44:41.240404710Z" level=warning msg="container event discarded" container=1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b type=CONTAINER_CREATED_EVENT Mar 2 13:44:41.260095 containerd[1564]: time="2026-03-02T13:44:41.240464973Z" level=warning msg="container event discarded" container=1e21185799432e708095605c460b6a534bbbb1a40cfb4d30f70b64fcd887864b type=CONTAINER_STARTED_EVENT Mar 2 13:44:41.354093 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:57334.service - OpenSSH per-connection server daemon (10.0.0.1:57334). Mar 2 13:44:41.652172 containerd[1564]: time="2026-03-02T13:44:41.651926459Z" level=warning msg="container event discarded" container=ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679 type=CONTAINER_CREATED_EVENT Mar 2 13:44:41.725799 containerd[1564]: time="2026-03-02T13:44:41.724757278Z" level=warning msg="container event discarded" container=dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56 type=CONTAINER_CREATED_EVENT Mar 2 13:44:41.962933 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 57334 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:41.965266 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:42.036424 systemd-logind[1541]: New session 14 of user core. Mar 2 13:44:42.090063 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 2 13:44:42.753461 containerd[1564]: time="2026-03-02T13:44:42.753359385Z" level=warning msg="container event discarded" container=ad3280b84c952a2d6ca78094ec00d314350f7bdbda1acb29ead6910434713679 type=CONTAINER_STARTED_EVENT Mar 2 13:44:42.795068 containerd[1564]: time="2026-03-02T13:44:42.794882612Z" level=warning msg="container event discarded" container=dda9bd193fc2464ffdabe2871c4493e7aaae96d81a2f9bf16d5f55dccd6b8f56 type=CONTAINER_STARTED_EVENT Mar 2 13:44:42.910905 sshd[4948]: Connection closed by 10.0.0.1 port 57334 Mar 2 13:44:42.920224 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:42.961441 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:57334.service: Deactivated successfully. Mar 2 13:44:42.986381 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:44:43.020368 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:44:43.032860 systemd-logind[1541]: Removed session 14. Mar 2 13:44:47.952354 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:57342.service - OpenSSH per-connection server daemon (10.0.0.1:57342). Mar 2 13:44:48.338820 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 57342 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:44:48.355089 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:44:48.396222 systemd-logind[1541]: New session 15 of user core. Mar 2 13:44:48.428548 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:44:49.047450 sshd[4966]: Connection closed by 10.0.0.1 port 57342 Mar 2 13:44:49.050382 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Mar 2 13:44:49.076850 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:57342.service: Deactivated successfully. Mar 2 13:44:49.111079 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:44:49.136789 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit. 
Mar 2 13:44:49.152543 systemd-logind[1541]: Removed session 15.
Mar 2 13:44:51.612087 kubelet[3001]: E0302 13:44:51.611426 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:44:53.618758 kubelet[3001]: E0302 13:44:53.618376 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:44:54.137327 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:35850.service - OpenSSH per-connection server daemon (10.0.0.1:35850).
Mar 2 13:44:54.877119 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 35850 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:44:54.908056 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:44:54.986297 systemd-logind[1541]: New session 16 of user core.
Mar 2 13:44:55.010528 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:44:55.957312 sshd[4988]: Connection closed by 10.0.0.1 port 35850
Mar 2 13:44:55.954147 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Mar 2 13:44:55.991274 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:35850.service: Deactivated successfully.
Mar 2 13:44:56.028390 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:44:56.049167 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:44:56.077364 systemd-logind[1541]: Removed session 16.
Mar 2 13:45:01.041841 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:39778.service - OpenSSH per-connection server daemon (10.0.0.1:39778).
Mar 2 13:45:01.441443 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 39778 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:01.450301 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:01.525492 systemd-logind[1541]: New session 17 of user core.
Mar 2 13:45:01.557336 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:45:03.204827 sshd[5006]: Connection closed by 10.0.0.1 port 39778
Mar 2 13:45:03.208402 sshd-session[5003]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:03.220000 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:39778.service: Deactivated successfully.
Mar 2 13:45:03.242327 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:45:03.285459 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:45:03.310501 systemd-logind[1541]: Removed session 17.
Mar 2 13:45:04.644353 kubelet[3001]: E0302 13:45:04.625778 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:45:08.259113 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:39784.service - OpenSSH per-connection server daemon (10.0.0.1:39784).
Mar 2 13:45:08.993011 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 39784 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:09.051315 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:09.134433 systemd-logind[1541]: New session 18 of user core.
Mar 2 13:45:09.223516 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:45:10.387319 sshd[5027]: Connection closed by 10.0.0.1 port 39784
Mar 2 13:45:10.396126 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:10.415517 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:45:10.421412 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:39784.service: Deactivated successfully.
Mar 2 13:45:10.449299 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:45:10.474251 systemd-logind[1541]: Removed session 18.
Mar 2 13:45:23.377521 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:48746.service - OpenSSH per-connection server daemon (10.0.0.1:48746).
Mar 2 13:45:23.408771 kubelet[3001]: E0302 13:45:23.408112 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:45:24.663658 kubelet[3001]: E0302 13:45:24.662388 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:45:24.663658 kubelet[3001]: E0302 13:45:24.662768 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:45:25.196709 kubelet[3001]: E0302 13:45:25.194178 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:45:26.026100 kubelet[3001]: E0302 13:45:26.024756 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.806s"
Mar 2 13:45:26.106100 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 48746 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:26.130661 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:26.360957 systemd-logind[1541]: New session 19 of user core.
Mar 2 13:45:26.399713 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:45:27.494977 sshd[5049]: Connection closed by 10.0.0.1 port 48746
Mar 2 13:45:27.497709 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:27.522683 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:48746.service: Deactivated successfully.
Mar 2 13:45:27.532042 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:45:27.558376 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:45:27.588403 systemd-logind[1541]: Removed session 19.
Mar 2 13:45:32.549123 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:46218.service - OpenSSH per-connection server daemon (10.0.0.1:46218).
Mar 2 13:45:32.856862 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 46218 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:32.864741 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:32.928408 systemd-logind[1541]: New session 20 of user core.
Mar 2 13:45:32.964258 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:45:34.046865 sshd[5067]: Connection closed by 10.0.0.1 port 46218
Mar 2 13:45:34.049308 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:34.083953 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:46218.service: Deactivated successfully.
Mar 2 13:45:34.100495 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:45:34.118374 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:45:34.136189 systemd-logind[1541]: Removed session 20.
Mar 2 13:45:39.117175 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:46226.service - OpenSSH per-connection server daemon (10.0.0.1:46226).
Mar 2 13:45:39.444135 sshd[5081]: Accepted publickey for core from 10.0.0.1 port 46226 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:39.446473 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:39.535841 systemd-logind[1541]: New session 21 of user core.
Mar 2 13:45:39.584500 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:45:40.530957 sshd[5084]: Connection closed by 10.0.0.1 port 46226
Mar 2 13:45:40.533464 sshd-session[5081]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:40.597246 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:46226.service: Deactivated successfully.
Mar 2 13:45:40.600515 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:45:40.617277 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:45:40.646711 systemd-logind[1541]: Removed session 21.
Mar 2 13:45:45.593021 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:45842.service - OpenSSH per-connection server daemon (10.0.0.1:45842).
Mar 2 13:45:45.826080 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 45842 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:45.837816 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:45.880248 systemd-logind[1541]: New session 22 of user core.
Mar 2 13:45:45.888020 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:45:46.457705 sshd[5103]: Connection closed by 10.0.0.1 port 45842
Mar 2 13:45:46.458746 sshd-session[5100]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:46.503197 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:45:46.506026 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:45842.service: Deactivated successfully.
Mar 2 13:45:46.516088 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:45:46.522978 systemd-logind[1541]: Removed session 22.
Mar 2 13:45:51.510978 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:43478.service - OpenSSH per-connection server daemon (10.0.0.1:43478).
Mar 2 13:45:51.689237 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 43478 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:51.693518 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:51.738387 systemd-logind[1541]: New session 23 of user core.
Mar 2 13:45:51.751981 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:45:52.089429 sshd[5121]: Connection closed by 10.0.0.1 port 43478
Mar 2 13:45:52.090519 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:52.107476 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:43478.service: Deactivated successfully.
Mar 2 13:45:52.117297 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:45:52.136938 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:45:52.145671 systemd-logind[1541]: Removed session 23.
Mar 2 13:45:57.181922 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:43494.service - OpenSSH per-connection server daemon (10.0.0.1:43494).
Mar 2 13:45:57.495417 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 43494 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:45:57.503546 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:45:57.567105 systemd-logind[1541]: New session 24 of user core.
Mar 2 13:45:57.583114 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:45:58.177035 sshd[5141]: Connection closed by 10.0.0.1 port 43494
Mar 2 13:45:58.179899 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Mar 2 13:45:58.221052 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:43494.service: Deactivated successfully.
Mar 2 13:45:58.243093 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:45:58.269273 systemd-logind[1541]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:45:58.286989 systemd-logind[1541]: Removed session 24.
Mar 2 13:46:02.616543 kubelet[3001]: E0302 13:46:02.609537 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:03.710260 systemd[1]: Started sshd@24-10.0.0.75:22-10.0.0.1:59832.service - OpenSSH per-connection server daemon (10.0.0.1:59832).
Mar 2 13:46:04.004002 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 59832 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:04.017491 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:04.084860 systemd-logind[1541]: New session 25 of user core.
Mar 2 13:46:04.103339 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:46:04.700925 kubelet[3001]: E0302 13:46:04.687011 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:04.715205 sshd[5159]: Connection closed by 10.0.0.1 port 59832
Mar 2 13:46:04.729513 sshd-session[5156]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:04.752897 systemd[1]: sshd@24-10.0.0.75:22-10.0.0.1:59832.service: Deactivated successfully.
Mar 2 13:46:04.777372 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:46:04.791150 systemd-logind[1541]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:46:04.794389 systemd-logind[1541]: Removed session 25.
Mar 2 13:46:11.026283 systemd[1]: Started sshd@25-10.0.0.75:22-10.0.0.1:59842.service - OpenSSH per-connection server daemon (10.0.0.1:59842).
Mar 2 13:46:11.743855 kubelet[3001]: E0302 13:46:11.742066 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.042s"
Mar 2 13:46:11.767013 kubelet[3001]: E0302 13:46:11.761264 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:12.345364 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 59842 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:12.356474 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:12.418277 systemd-logind[1541]: New session 26 of user core.
Mar 2 13:46:12.532894 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:46:14.739355 kubelet[3001]: E0302 13:46:14.729203 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:15.149296 sshd[5177]: Connection closed by 10.0.0.1 port 59842
Mar 2 13:46:15.151173 sshd-session[5174]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:15.164021 systemd[1]: sshd@25-10.0.0.75:22-10.0.0.1:59842.service: Deactivated successfully.
Mar 2 13:46:15.184509 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:46:15.194920 systemd-logind[1541]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:46:15.208474 systemd-logind[1541]: Removed session 26.
Mar 2 13:46:20.242224 systemd[1]: Started sshd@26-10.0.0.75:22-10.0.0.1:56686.service - OpenSSH per-connection server daemon (10.0.0.1:56686).
Mar 2 13:46:20.717861 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:20.733093 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:20.794879 systemd-logind[1541]: New session 27 of user core.
Mar 2 13:46:20.816325 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:46:21.529061 sshd[5194]: Connection closed by 10.0.0.1 port 56686
Mar 2 13:46:21.531541 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:21.544085 systemd[1]: sshd@26-10.0.0.75:22-10.0.0.1:56686.service: Deactivated successfully.
Mar 2 13:46:21.552221 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:46:21.593240 systemd-logind[1541]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:46:21.626210 systemd-logind[1541]: Removed session 27.
Mar 2 13:46:25.623410 kubelet[3001]: E0302 13:46:25.618091 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:26.628263 kubelet[3001]: E0302 13:46:26.622178 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:26.641100 systemd[1]: Started sshd@27-10.0.0.75:22-10.0.0.1:56692.service - OpenSSH per-connection server daemon (10.0.0.1:56692).
Mar 2 13:46:27.106917 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 56692 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:27.139391 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:27.213256 systemd-logind[1541]: New session 28 of user core.
Mar 2 13:46:27.282997 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 13:46:28.718115 sshd[5216]: Connection closed by 10.0.0.1 port 56692
Mar 2 13:46:28.704955 sshd-session[5213]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:28.747397 systemd[1]: sshd@27-10.0.0.75:22-10.0.0.1:56692.service: Deactivated successfully.
Mar 2 13:46:28.792400 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 13:46:28.816403 systemd-logind[1541]: Session 28 logged out. Waiting for processes to exit.
Mar 2 13:46:28.851049 systemd-logind[1541]: Removed session 28.
Mar 2 13:46:32.609793 kubelet[3001]: E0302 13:46:32.609236 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:33.627910 kubelet[3001]: E0302 13:46:33.612285 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:46:33.837356 systemd[1]: Started sshd@28-10.0.0.75:22-10.0.0.1:38794.service - OpenSSH per-connection server daemon (10.0.0.1:38794).
Mar 2 13:46:34.420114 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 38794 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:34.429872 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:34.484104 systemd-logind[1541]: New session 29 of user core.
Mar 2 13:46:34.534186 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 13:46:35.537355 sshd[5234]: Connection closed by 10.0.0.1 port 38794
Mar 2 13:46:35.536403 sshd-session[5231]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:35.583534 systemd[1]: sshd@28-10.0.0.75:22-10.0.0.1:38794.service: Deactivated successfully.
Mar 2 13:46:35.598033 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:46:35.621921 systemd-logind[1541]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:46:35.665361 systemd-logind[1541]: Removed session 29.
Mar 2 13:46:40.603237 systemd[1]: Started sshd@29-10.0.0.75:22-10.0.0.1:49010.service - OpenSSH per-connection server daemon (10.0.0.1:49010).
Mar 2 13:46:41.019517 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 49010 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:41.037304 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:41.092942 systemd-logind[1541]: New session 30 of user core.
Mar 2 13:46:41.145505 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 13:46:42.126975 sshd[5251]: Connection closed by 10.0.0.1 port 49010
Mar 2 13:46:42.130067 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:42.161197 systemd[1]: sshd@29-10.0.0.75:22-10.0.0.1:49010.service: Deactivated successfully.
Mar 2 13:46:42.193388 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 13:46:42.204936 systemd-logind[1541]: Session 30 logged out. Waiting for processes to exit.
Mar 2 13:46:42.213348 systemd-logind[1541]: Removed session 30.
Mar 2 13:46:47.211537 systemd[1]: Started sshd@30-10.0.0.75:22-10.0.0.1:49036.service - OpenSSH per-connection server daemon (10.0.0.1:49036).
Mar 2 13:46:47.696408 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 49036 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:47.725073 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:47.819166 systemd-logind[1541]: New session 31 of user core.
Mar 2 13:46:47.864924 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 13:46:49.652188 sshd[5268]: Connection closed by 10.0.0.1 port 49036
Mar 2 13:46:49.650982 sshd-session[5265]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:49.721329 systemd[1]: sshd@30-10.0.0.75:22-10.0.0.1:49036.service: Deactivated successfully.
Mar 2 13:46:49.724142 systemd-logind[1541]: Session 31 logged out. Waiting for processes to exit.
Mar 2 13:46:49.752098 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 13:46:49.773450 systemd-logind[1541]: Removed session 31.
Mar 2 13:46:54.724342 systemd[1]: Started sshd@31-10.0.0.75:22-10.0.0.1:59952.service - OpenSSH per-connection server daemon (10.0.0.1:59952).
Mar 2 13:46:55.192068 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 59952 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:46:55.215426 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:46:55.284980 systemd-logind[1541]: New session 32 of user core.
Mar 2 13:46:55.338894 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 13:46:56.317101 sshd[5289]: Connection closed by 10.0.0.1 port 59952
Mar 2 13:46:56.339515 sshd-session[5286]: pam_unix(sshd:session): session closed for user core
Mar 2 13:46:56.386482 systemd[1]: sshd@31-10.0.0.75:22-10.0.0.1:59952.service: Deactivated successfully.
Mar 2 13:46:56.396316 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 13:46:56.419162 systemd-logind[1541]: Session 32 logged out. Waiting for processes to exit.
Mar 2 13:46:56.441297 systemd-logind[1541]: Removed session 32.
Mar 2 13:47:01.426177 systemd[1]: Started sshd@32-10.0.0.75:22-10.0.0.1:60416.service - OpenSSH per-connection server daemon (10.0.0.1:60416).
Mar 2 13:47:01.756372 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 60416 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:01.751356 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:01.808002 systemd-logind[1541]: New session 33 of user core.
Mar 2 13:47:01.849028 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 13:47:02.864502 sshd[5306]: Connection closed by 10.0.0.1 port 60416
Mar 2 13:47:02.880292 sshd-session[5303]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:02.912147 systemd[1]: sshd@32-10.0.0.75:22-10.0.0.1:60416.service: Deactivated successfully.
Mar 2 13:47:02.927933 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 13:47:02.987081 systemd-logind[1541]: Session 33 logged out. Waiting for processes to exit.
Mar 2 13:47:03.000031 systemd-logind[1541]: Removed session 33.
Mar 2 13:47:08.931895 systemd[1]: Started sshd@33-10.0.0.75:22-10.0.0.1:60432.service - OpenSSH per-connection server daemon (10.0.0.1:60432).
Mar 2 13:47:09.144856 kubelet[3001]: E0302 13:47:09.141799 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:09.521432 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 60432 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:09.533267 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:09.598905 systemd-logind[1541]: New session 34 of user core.
Mar 2 13:47:09.616347 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 13:47:10.695860 sshd[5324]: Connection closed by 10.0.0.1 port 60432
Mar 2 13:47:10.697971 sshd-session[5321]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:10.739165 systemd[1]: sshd@33-10.0.0.75:22-10.0.0.1:60432.service: Deactivated successfully.
Mar 2 13:47:10.754276 systemd[1]: session-34.scope: Deactivated successfully.
Mar 2 13:47:10.767468 systemd-logind[1541]: Session 34 logged out. Waiting for processes to exit.
Mar 2 13:47:10.790263 systemd-logind[1541]: Removed session 34.
Mar 2 13:47:12.638836 kubelet[3001]: E0302 13:47:12.631166 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:15.739450 systemd[1]: Started sshd@34-10.0.0.75:22-10.0.0.1:52470.service - OpenSSH per-connection server daemon (10.0.0.1:52470).
Mar 2 13:47:16.116277 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 52470 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:16.119947 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:16.164369 systemd-logind[1541]: New session 35 of user core.
Mar 2 13:47:16.209478 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 2 13:47:17.150364 sshd[5342]: Connection closed by 10.0.0.1 port 52470
Mar 2 13:47:17.166460 sshd-session[5339]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:17.236275 systemd[1]: sshd@34-10.0.0.75:22-10.0.0.1:52470.service: Deactivated successfully.
Mar 2 13:47:17.263339 systemd[1]: session-35.scope: Deactivated successfully.
Mar 2 13:47:17.299210 systemd-logind[1541]: Session 35 logged out. Waiting for processes to exit.
Mar 2 13:47:17.356004 systemd[1]: Started sshd@35-10.0.0.75:22-10.0.0.1:52500.service - OpenSSH per-connection server daemon (10.0.0.1:52500).
Mar 2 13:47:17.392355 systemd-logind[1541]: Removed session 35.
Mar 2 13:47:17.930141 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:17.950402 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:18.039952 systemd-logind[1541]: New session 36 of user core.
Mar 2 13:47:18.063004 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 2 13:47:19.633838 sshd[5360]: Connection closed by 10.0.0.1 port 52500
Mar 2 13:47:19.633855 sshd-session[5357]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:19.719929 systemd[1]: sshd@35-10.0.0.75:22-10.0.0.1:52500.service: Deactivated successfully.
Mar 2 13:47:19.741918 systemd[1]: session-36.scope: Deactivated successfully.
Mar 2 13:47:19.755177 systemd-logind[1541]: Session 36 logged out. Waiting for processes to exit.
Mar 2 13:47:19.799026 systemd[1]: Started sshd@36-10.0.0.75:22-10.0.0.1:52506.service - OpenSSH per-connection server daemon (10.0.0.1:52506).
Mar 2 13:47:19.808350 systemd-logind[1541]: Removed session 36.
Mar 2 13:47:20.475314 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 52506 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:20.493401 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:20.538851 systemd-logind[1541]: New session 37 of user core.
Mar 2 13:47:20.570454 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 2 13:47:21.482835 sshd[5376]: Connection closed by 10.0.0.1 port 52506
Mar 2 13:47:21.478962 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:21.496959 systemd[1]: sshd@36-10.0.0.75:22-10.0.0.1:52506.service: Deactivated successfully.
Mar 2 13:47:21.508341 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 13:47:21.514475 systemd-logind[1541]: Session 37 logged out. Waiting for processes to exit.
Mar 2 13:47:21.525244 systemd-logind[1541]: Removed session 37.
Mar 2 13:47:24.628797 kubelet[3001]: E0302 13:47:24.625252 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:26.533249 systemd[1]: Started sshd@37-10.0.0.75:22-10.0.0.1:33732.service - OpenSSH per-connection server daemon (10.0.0.1:33732).
Mar 2 13:47:26.910993 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 33732 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:26.916041 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:26.947156 systemd-logind[1541]: New session 38 of user core.
Mar 2 13:47:26.961200 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 2 13:47:27.602722 sshd[5402]: Connection closed by 10.0.0.1 port 33732
Mar 2 13:47:27.601957 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:27.628323 systemd[1]: sshd@37-10.0.0.75:22-10.0.0.1:33732.service: Deactivated successfully.
Mar 2 13:47:27.636424 systemd[1]: session-38.scope: Deactivated successfully.
Mar 2 13:47:27.649142 systemd-logind[1541]: Session 38 logged out. Waiting for processes to exit.
Mar 2 13:47:27.659378 systemd-logind[1541]: Removed session 38.
Mar 2 13:47:31.618704 kubelet[3001]: E0302 13:47:31.611895 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:32.665037 systemd[1]: Started sshd@38-10.0.0.75:22-10.0.0.1:58838.service - OpenSSH per-connection server daemon (10.0.0.1:58838).
Mar 2 13:47:33.029967 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 58838 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:33.038968 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:33.101026 systemd-logind[1541]: New session 39 of user core.
Mar 2 13:47:33.145003 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 2 13:47:33.792109 sshd[5419]: Connection closed by 10.0.0.1 port 58838
Mar 2 13:47:33.793388 sshd-session[5416]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:33.857100 systemd[1]: sshd@38-10.0.0.75:22-10.0.0.1:58838.service: Deactivated successfully.
Mar 2 13:47:33.934794 systemd[1]: session-39.scope: Deactivated successfully.
Mar 2 13:47:33.956399 systemd-logind[1541]: Session 39 logged out. Waiting for processes to exit.
Mar 2 13:47:33.999891 systemd-logind[1541]: Removed session 39.
Mar 2 13:47:38.837171 systemd[1]: Started sshd@39-10.0.0.75:22-10.0.0.1:58844.service - OpenSSH per-connection server daemon (10.0.0.1:58844).
Mar 2 13:47:39.025617 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 58844 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:39.032147 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:39.057874 systemd-logind[1541]: New session 40 of user core.
Mar 2 13:47:39.071412 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 2 13:47:39.493416 sshd[5436]: Connection closed by 10.0.0.1 port 58844
Mar 2 13:47:39.495368 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:39.504776 systemd[1]: sshd@39-10.0.0.75:22-10.0.0.1:58844.service: Deactivated successfully.
Mar 2 13:47:39.510856 systemd[1]: session-40.scope: Deactivated successfully.
Mar 2 13:47:39.519978 systemd-logind[1541]: Session 40 logged out. Waiting for processes to exit.
Mar 2 13:47:39.530282 systemd-logind[1541]: Removed session 40.
Mar 2 13:47:40.615775 kubelet[3001]: E0302 13:47:40.611062 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:41.636198 kubelet[3001]: E0302 13:47:41.626405 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:41.642162 kubelet[3001]: E0302 13:47:41.642130 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:47:44.548183 systemd[1]: Started sshd@40-10.0.0.75:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656).
Mar 2 13:47:44.751178 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:47:44.757131 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:47:44.795430 systemd-logind[1541]: New session 41 of user core.
Mar 2 13:47:44.808354 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 2 13:47:45.284903 sshd[5454]: Connection closed by 10.0.0.1 port 36656
Mar 2 13:47:45.287537 sshd-session[5450]: pam_unix(sshd:session): session closed for user core
Mar 2 13:47:45.313177 systemd[1]: sshd@40-10.0.0.75:22-10.0.0.1:36656.service: Deactivated successfully.
Mar 2 13:47:45.323260 systemd[1]: session-41.scope: Deactivated successfully.
Mar 2 13:47:45.334888 systemd-logind[1541]: Session 41 logged out. Waiting for processes to exit.
Mar 2 13:47:45.339342 systemd-logind[1541]: Removed session 41.
Mar 2 13:47:47.618889 kubelet[3001]: E0302 13:47:47.616957 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:47:50.401840 systemd[1]: Started sshd@41-10.0.0.75:22-10.0.0.1:55700.service - OpenSSH per-connection server daemon (10.0.0.1:55700). Mar 2 13:47:50.724381 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 55700 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:47:50.734819 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:47:50.774251 systemd-logind[1541]: New session 42 of user core. Mar 2 13:47:50.819309 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 2 13:47:51.553298 sshd[5472]: Connection closed by 10.0.0.1 port 55700 Mar 2 13:47:51.557317 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Mar 2 13:47:51.579897 systemd[1]: sshd@41-10.0.0.75:22-10.0.0.1:55700.service: Deactivated successfully. Mar 2 13:47:51.617310 systemd[1]: session-42.scope: Deactivated successfully. Mar 2 13:47:51.650012 systemd-logind[1541]: Session 42 logged out. Waiting for processes to exit. Mar 2 13:47:51.673418 systemd-logind[1541]: Removed session 42. Mar 2 13:47:56.621355 systemd[1]: Started sshd@42-10.0.0.75:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). Mar 2 13:47:57.141831 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:47:57.146149 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:47:57.234222 systemd-logind[1541]: New session 43 of user core. Mar 2 13:47:57.273743 systemd[1]: Started session-43.scope - Session 43 of User core. 
Mar 2 13:47:58.013507 sshd[5493]: Connection closed by 10.0.0.1 port 55734 Mar 2 13:47:58.016072 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Mar 2 13:47:58.045863 systemd[1]: sshd@42-10.0.0.75:22-10.0.0.1:55734.service: Deactivated successfully. Mar 2 13:47:58.059027 systemd[1]: session-43.scope: Deactivated successfully. Mar 2 13:47:58.078167 systemd-logind[1541]: Session 43 logged out. Waiting for processes to exit. Mar 2 13:47:58.101240 systemd-logind[1541]: Removed session 43. Mar 2 13:48:03.140322 systemd[1]: Started sshd@43-10.0.0.75:22-10.0.0.1:43760.service - OpenSSH per-connection server daemon (10.0.0.1:43760). Mar 2 13:48:03.517273 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 43760 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:03.525067 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:03.599335 systemd-logind[1541]: New session 44 of user core. Mar 2 13:48:03.660331 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 2 13:48:04.637846 sshd[5510]: Connection closed by 10.0.0.1 port 43760 Mar 2 13:48:04.637988 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:04.684161 systemd-logind[1541]: Session 44 logged out. Waiting for processes to exit. Mar 2 13:48:04.688338 systemd[1]: sshd@43-10.0.0.75:22-10.0.0.1:43760.service: Deactivated successfully. Mar 2 13:48:04.702297 systemd[1]: session-44.scope: Deactivated successfully. Mar 2 13:48:04.722773 systemd-logind[1541]: Removed session 44. Mar 2 13:48:09.756120 systemd[1]: Started sshd@44-10.0.0.75:22-10.0.0.1:43770.service - OpenSSH per-connection server daemon (10.0.0.1:43770). 
Mar 2 13:48:10.216063 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 43770 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:10.229295 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:10.374307 systemd-logind[1541]: New session 45 of user core. Mar 2 13:48:10.397903 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 2 13:48:11.412304 sshd[5526]: Connection closed by 10.0.0.1 port 43770 Mar 2 13:48:11.419885 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:11.444539 systemd[1]: sshd@44-10.0.0.75:22-10.0.0.1:43770.service: Deactivated successfully. Mar 2 13:48:11.452965 systemd[1]: session-45.scope: Deactivated successfully. Mar 2 13:48:11.467888 systemd-logind[1541]: Session 45 logged out. Waiting for processes to exit. Mar 2 13:48:11.479147 systemd-logind[1541]: Removed session 45. Mar 2 13:48:12.623893 kubelet[3001]: E0302 13:48:12.621854 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:16.466098 systemd[1]: Started sshd@45-10.0.0.75:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746). Mar 2 13:48:17.089110 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:17.122355 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:17.205506 systemd-logind[1541]: New session 46 of user core. Mar 2 13:48:17.223322 systemd[1]: Started session-46.scope - Session 46 of User core. 
Mar 2 13:48:18.682885 sshd[5542]: Connection closed by 10.0.0.1 port 50746 Mar 2 13:48:18.687003 sshd-session[5539]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:18.734037 systemd[1]: sshd@45-10.0.0.75:22-10.0.0.1:50746.service: Deactivated successfully. Mar 2 13:48:18.769072 systemd[1]: session-46.scope: Deactivated successfully. Mar 2 13:48:18.806922 systemd-logind[1541]: Session 46 logged out. Waiting for processes to exit. Mar 2 13:48:18.840966 systemd-logind[1541]: Removed session 46. Mar 2 13:48:23.860261 systemd[1]: Started sshd@46-10.0.0.75:22-10.0.0.1:59724.service - OpenSSH per-connection server daemon (10.0.0.1:59724). Mar 2 13:48:24.321871 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 59724 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:24.341841 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:24.405867 systemd-logind[1541]: New session 47 of user core. Mar 2 13:48:24.452536 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 2 13:48:25.721295 sshd[5560]: Connection closed by 10.0.0.1 port 59724 Mar 2 13:48:25.715210 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:25.801997 systemd[1]: sshd@46-10.0.0.75:22-10.0.0.1:59724.service: Deactivated successfully. Mar 2 13:48:25.886013 systemd[1]: session-47.scope: Deactivated successfully. Mar 2 13:48:25.925485 systemd-logind[1541]: Session 47 logged out. Waiting for processes to exit. Mar 2 13:48:25.969898 systemd-logind[1541]: Removed session 47. Mar 2 13:48:30.903145 systemd[1]: Started sshd@47-10.0.0.75:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). 
Mar 2 13:48:31.888193 kubelet[3001]: E0302 13:48:31.804820 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:43.449454 kubelet[3001]: E0302 13:48:43.438262 3001 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 2 13:48:43.468055 systemd[1]: cri-containerd-aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60.scope: Deactivated successfully. Mar 2 13:48:43.483199 systemd[1]: cri-containerd-aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60.scope: Consumed 58.613s CPU time, 66.1M memory peak, 11.8M read from disk. Mar 2 13:48:43.737222 systemd[1]: cri-containerd-2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1.scope: Deactivated successfully. Mar 2 13:48:43.738131 systemd[1]: cri-containerd-2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1.scope: Consumed 36.329s CPU time, 24.5M memory peak, 284K read from disk. 
Mar 2 13:48:43.787463 kubelet[3001]: E0302 13:48:43.780228 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:43.788190 kubelet[3001]: E0302 13:48:43.788164 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:43.800827 kubelet[3001]: E0302 13:48:43.800790 3001 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Mar 2 13:48:43.818302 systemd[1]: cri-containerd-c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8.scope: Deactivated successfully. Mar 2 13:48:43.833012 systemd[1]: cri-containerd-c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8.scope: Consumed 7.068s CPU time, 30.8M memory peak, 4K written to disk. 
Mar 2 13:48:43.900160 containerd[1564]: time="2026-03-02T13:48:43.892135924Z" level=info msg="received container exit event container_id:\"aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60\" id:\"aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60\" pid:2831 exit_status:1 exited_at:{seconds:1772459323 nanos:845006132}" Mar 2 13:48:43.902035 kubelet[3001]: E0302 13:48:43.902003 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:44.087947 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:44.103863 containerd[1564]: time="2026-03-02T13:48:44.103813380Z" level=info msg="received container exit event container_id:\"2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1\" id:\"2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1\" pid:2852 exit_status:1 exited_at:{seconds:1772459324 nanos:80865432}" Mar 2 13:48:44.110138 containerd[1564]: time="2026-03-02T13:48:44.110102060Z" level=info msg="received container exit event container_id:\"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\" id:\"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\" pid:3769 exit_status:1 exited_at:{seconds:1772459324 nanos:75159803}" Mar 2 13:48:44.115145 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:44.244832 systemd-logind[1541]: New session 48 of user core. Mar 2 13:48:44.266200 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 2 13:48:45.278905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8-rootfs.mount: Deactivated successfully. 
Mar 2 13:48:45.279074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1-rootfs.mount: Deactivated successfully. Mar 2 13:48:45.279197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60-rootfs.mount: Deactivated successfully. Mar 2 13:48:45.326832 sshd[5581]: Connection closed by 10.0.0.1 port 56194 Mar 2 13:48:45.312500 sshd-session[5576]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:45.379055 systemd[1]: sshd@47-10.0.0.75:22-10.0.0.1:56194.service: Deactivated successfully. Mar 2 13:48:45.406186 systemd[1]: session-48.scope: Deactivated successfully. Mar 2 13:48:45.426266 systemd-logind[1541]: Session 48 logged out. Waiting for processes to exit. Mar 2 13:48:45.496522 systemd-logind[1541]: Removed session 48. Mar 2 13:48:45.878719 kubelet[3001]: I0302 13:48:45.874300 3001 scope.go:117] "RemoveContainer" containerID="c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8" Mar 2 13:48:45.878719 kubelet[3001]: E0302 13:48:45.874517 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:45.916970 containerd[1564]: time="2026-03-02T13:48:45.906076597Z" level=info msg="CreateContainer within sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Mar 2 13:48:45.938078 kubelet[3001]: I0302 13:48:45.915759 3001 scope.go:117] "RemoveContainer" containerID="2ec1b3a4e0ba4f87ff28f328f7a41edef7ef16041fa30bccab9ea40740b0e1f1" Mar 2 13:48:45.938078 kubelet[3001]: E0302 13:48:45.915855 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:48:45.992925 kubelet[3001]: I0302 13:48:45.987090 3001 scope.go:117] "RemoveContainer" containerID="aa143933251b4bd3f079dcfd4eb34350e7aa3fff923af5a6b8793f19097d5f60" Mar 2 13:48:45.992925 kubelet[3001]: E0302 13:48:45.987183 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:46.036678 containerd[1564]: time="2026-03-02T13:48:46.018274953Z" level=info msg="CreateContainer within sandbox \"84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 2 13:48:46.095794 containerd[1564]: time="2026-03-02T13:48:46.048180606Z" level=info msg="CreateContainer within sandbox \"6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 2 13:48:46.456823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802303738.mount: Deactivated successfully.
Mar 2 13:48:46.475901 containerd[1564]: time="2026-03-02T13:48:46.471010108Z" level=info msg="Container 5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:48:46.500853 containerd[1564]: time="2026-03-02T13:48:46.500804694Z" level=info msg="Container 9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:48:46.598910 containerd[1564]: time="2026-03-02T13:48:46.598094253Z" level=info msg="Container 3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:48:46.652880 containerd[1564]: time="2026-03-02T13:48:46.650512895Z" level=info msg="CreateContainer within sandbox \"6278eededc694c0639d8d529c95c5abda60d7412d5bfc221feeab4f6b69ca4c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717\"" Mar 2 13:48:46.664517 containerd[1564]: time="2026-03-02T13:48:46.663897567Z" level=info msg="StartContainer for \"9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717\"" Mar 2 13:48:46.696174 containerd[1564]: time="2026-03-02T13:48:46.693776450Z" level=info msg="connecting to shim 9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717" address="unix:///run/containerd/s/e4788b7f404ccb0b7394a176c45007151da80fe32850f4cecc88ffea6316164b" protocol=ttrpc version=3 Mar 2 13:48:46.708843 containerd[1564]: time="2026-03-02T13:48:46.700506731Z" level=info msg="CreateContainer within sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\""
Mar 2 13:48:46.708843 containerd[1564]: time="2026-03-02T13:48:46.703167396Z" level=info msg="StartContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\"" Mar 2 13:48:46.738730 containerd[1564]: time="2026-03-02T13:48:46.729268859Z" level=info msg="connecting to shim 5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b" address="unix:///run/containerd/s/8e714ef65bbef10be3d66f29fb213ef98aaeb55061fbdde9e11427f8c49ba948" protocol=ttrpc version=3 Mar 2 13:48:46.851978 containerd[1564]: time="2026-03-02T13:48:46.851921402Z" level=info msg="CreateContainer within sandbox \"84ad3ea4bbc0bb53ff847c5f69fe1e73b875421325a49f2281e7ce5ccc7862da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556\"" Mar 2 13:48:46.909495 containerd[1564]: time="2026-03-02T13:48:46.893495379Z" level=info msg="StartContainer for \"3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556\"" Mar 2 13:48:46.936803 containerd[1564]: time="2026-03-02T13:48:46.936534345Z" level=info msg="connecting to shim 3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556" address="unix:///run/containerd/s/af7d3b1aa0594599de36ac24ac1f16cf28c161c288013573aa8ffc7494323903" protocol=ttrpc version=3 Mar 2 13:48:47.012938 systemd[1]: Started cri-containerd-9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717.scope - libcontainer container 9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717. Mar 2 13:48:47.162074 systemd[1]: Started cri-containerd-5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b.scope - libcontainer container 5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b. Mar 2 13:48:47.176448 systemd[1]: Started cri-containerd-3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556.scope - libcontainer container 3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556.
Mar 2 13:48:47.955738 containerd[1564]: time="2026-03-02T13:48:47.941256653Z" level=info msg="StartContainer for \"9b8f63c9a5e90744ede49472ef359971a6dfd9a7a8e4bb4e2f2af5ae8c6f9717\" returns successfully" Mar 2 13:48:48.432865 kubelet[3001]: E0302 13:48:48.417226 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:48.582084 containerd[1564]: time="2026-03-02T13:48:48.580843271Z" level=info msg="StartContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" returns successfully" Mar 2 13:48:48.621904 kubelet[3001]: E0302 13:48:48.617851 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:48.738919 containerd[1564]: time="2026-03-02T13:48:48.737456309Z" level=info msg="StartContainer for \"3fd3c06e4d2e28ed69b2b08b7da961881be8fc46d3f92eefe73e004e00563556\" returns successfully" Mar 2 13:48:49.458839 kubelet[3001]: E0302 13:48:49.455742 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:49.509771 kubelet[3001]: E0302 13:48:49.502078 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:49.509995 kubelet[3001]: E0302 13:48:49.509972 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:49.625812 kubelet[3001]: E0302 13:48:49.625767 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:48:50.402184 systemd[1]: Started sshd@48-10.0.0.75:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134). Mar 2 13:48:50.533751 kubelet[3001]: E0302 13:48:50.527263 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:50.548855 kubelet[3001]: E0302 13:48:50.530839 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:50.874971 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:50.882239 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:50.935087 systemd-logind[1541]: New session 49 of user core. Mar 2 13:48:50.956958 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 2 13:48:51.649077 sshd[5731]: Connection closed by 10.0.0.1 port 56134 Mar 2 13:48:51.672898 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:51.696532 systemd[1]: sshd@48-10.0.0.75:22-10.0.0.1:56134.service: Deactivated successfully. Mar 2 13:48:51.696827 systemd-logind[1541]: Session 49 logged out. Waiting for processes to exit. Mar 2 13:48:51.715018 systemd[1]: session-49.scope: Deactivated successfully. Mar 2 13:48:51.737101 systemd-logind[1541]: Removed session 49. Mar 2 13:48:53.756495 kubelet[3001]: E0302 13:48:53.753169 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:48:56.774928 systemd[1]: Started sshd@49-10.0.0.75:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200).
Mar 2 13:48:57.500927 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:48:57.559533 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:48:57.637997 systemd-logind[1541]: New session 50 of user core. Mar 2 13:48:57.718424 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 2 13:48:59.146149 sshd[5754]: Connection closed by 10.0.0.1 port 56200 Mar 2 13:48:59.149039 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Mar 2 13:48:59.197199 systemd[1]: sshd@49-10.0.0.75:22-10.0.0.1:56200.service: Deactivated successfully. Mar 2 13:48:59.222518 systemd[1]: session-50.scope: Deactivated successfully. Mar 2 13:48:59.231860 systemd-logind[1541]: Session 50 logged out. Waiting for processes to exit. Mar 2 13:48:59.265024 systemd-logind[1541]: Removed session 50. Mar 2 13:48:59.352067 kubelet[3001]: E0302 13:48:59.351164 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:00.621805 kubelet[3001]: E0302 13:49:00.620835 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:03.826903 kubelet[3001]: E0302 13:49:03.821485 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:03.965041 kubelet[3001]: E0302 13:49:03.962069 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:04.246861 systemd[1]: Started sshd@50-10.0.0.75:22-10.0.0.1:56386.service - OpenSSH per-connection server daemon (10.0.0.1:56386).
Mar 2 13:49:04.749844 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 56386 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:04.762118 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:04.867922 systemd-logind[1541]: New session 51 of user core. Mar 2 13:49:04.938011 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 2 13:49:06.222980 sshd[5771]: Connection closed by 10.0.0.1 port 56386 Mar 2 13:49:06.230047 sshd-session[5768]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:06.279489 systemd[1]: sshd@50-10.0.0.75:22-10.0.0.1:56386.service: Deactivated successfully. Mar 2 13:49:06.316852 systemd[1]: session-51.scope: Deactivated successfully. Mar 2 13:49:06.334893 systemd-logind[1541]: Session 51 logged out. Waiting for processes to exit. Mar 2 13:49:06.414411 systemd-logind[1541]: Removed session 51. Mar 2 13:49:11.300140 systemd[1]: Started sshd@51-10.0.0.75:22-10.0.0.1:42216.service - OpenSSH per-connection server daemon (10.0.0.1:42216). Mar 2 13:49:11.859434 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 42216 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:11.881076 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:11.968028 systemd-logind[1541]: New session 52 of user core. Mar 2 13:49:11.994894 systemd[1]: Started session-52.scope - Session 52 of User core. Mar 2 13:49:13.075978 sshd[5788]: Connection closed by 10.0.0.1 port 42216 Mar 2 13:49:13.084938 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:13.121034 systemd[1]: sshd@51-10.0.0.75:22-10.0.0.1:42216.service: Deactivated successfully. Mar 2 13:49:13.157219 systemd[1]: session-52.scope: Deactivated successfully. Mar 2 13:49:13.173837 systemd-logind[1541]: Session 52 logged out. Waiting for processes to exit.
Mar 2 13:49:13.196425 systemd-logind[1541]: Removed session 52. Mar 2 13:49:19.038021 systemd[1]: Started sshd@52-10.0.0.75:22-10.0.0.1:42232.service - OpenSSH per-connection server daemon (10.0.0.1:42232). Mar 2 13:49:19.140922 kubelet[3001]: E0302 13:49:19.140520 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.718s" Mar 2 13:49:19.909215 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 42232 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:19.947063 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:20.125154 systemd-logind[1541]: New session 53 of user core. Mar 2 13:49:20.163147 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 2 13:49:24.504828 sshd[5806]: Connection closed by 10.0.0.1 port 42232 Mar 2 13:49:24.505933 sshd-session[5802]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:24.536963 systemd[1]: sshd@52-10.0.0.75:22-10.0.0.1:42232.service: Deactivated successfully. Mar 2 13:49:24.552383 systemd[1]: session-53.scope: Deactivated successfully. Mar 2 13:49:24.575908 systemd-logind[1541]: Session 53 logged out. Waiting for processes to exit. Mar 2 13:49:24.594423 systemd-logind[1541]: Removed session 53. Mar 2 13:49:29.744146 systemd[1]: Started sshd@53-10.0.0.75:22-10.0.0.1:44084.service - OpenSSH per-connection server daemon (10.0.0.1:44084). Mar 2 13:49:30.645816 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 44084 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:30.652766 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:30.706769 systemd-logind[1541]: New session 54 of user core. Mar 2 13:49:30.730398 systemd[1]: Started session-54.scope - Session 54 of User core. 
Mar 2 13:49:31.707140 sshd[5827]: Connection closed by 10.0.0.1 port 44084 Mar 2 13:49:31.706966 sshd-session[5824]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:31.736943 systemd[1]: sshd@53-10.0.0.75:22-10.0.0.1:44084.service: Deactivated successfully. Mar 2 13:49:31.765183 systemd[1]: session-54.scope: Deactivated successfully. Mar 2 13:49:31.777187 systemd-logind[1541]: Session 54 logged out. Waiting for processes to exit. Mar 2 13:49:31.819130 systemd-logind[1541]: Removed session 54. Mar 2 13:49:36.806447 systemd[1]: Started sshd@54-10.0.0.75:22-10.0.0.1:43036.service - OpenSSH per-connection server daemon (10.0.0.1:43036). Mar 2 13:49:37.164490 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 43036 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:37.166405 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:37.210747 systemd-logind[1541]: New session 55 of user core. Mar 2 13:49:37.237054 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 2 13:49:38.185724 sshd[5845]: Connection closed by 10.0.0.1 port 43036 Mar 2 13:49:38.188971 sshd-session[5842]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:38.227163 systemd[1]: sshd@54-10.0.0.75:22-10.0.0.1:43036.service: Deactivated successfully. Mar 2 13:49:38.253014 systemd[1]: session-55.scope: Deactivated successfully. Mar 2 13:49:38.264977 systemd-logind[1541]: Session 55 logged out. Waiting for processes to exit. Mar 2 13:49:38.287327 systemd-logind[1541]: Removed session 55. Mar 2 13:49:43.296060 systemd[1]: Started sshd@55-10.0.0.75:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780). 
Mar 2 13:49:43.695431 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:43.704884 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:43.770972 systemd-logind[1541]: New session 56 of user core. Mar 2 13:49:43.804785 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 2 13:49:44.809843 sshd[5863]: Connection closed by 10.0.0.1 port 34780 Mar 2 13:49:44.808039 sshd-session[5860]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:44.836541 systemd[1]: sshd@55-10.0.0.75:22-10.0.0.1:34780.service: Deactivated successfully. Mar 2 13:49:44.861443 systemd[1]: session-56.scope: Deactivated successfully. Mar 2 13:49:44.887852 systemd-logind[1541]: Session 56 logged out. Waiting for processes to exit. Mar 2 13:49:44.904040 systemd-logind[1541]: Removed session 56. Mar 2 13:49:46.615823 kubelet[3001]: E0302 13:49:46.613308 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:49.884725 systemd[1]: Started sshd@56-10.0.0.75:22-10.0.0.1:34790.service - OpenSSH per-connection server daemon (10.0.0.1:34790). Mar 2 13:49:50.277023 sshd[5878]: Accepted publickey for core from 10.0.0.1 port 34790 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:50.285934 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:50.336297 systemd-logind[1541]: New session 57 of user core. Mar 2 13:49:50.368990 systemd[1]: Started session-57.scope - Session 57 of User core. 
Mar 2 13:49:51.076387 sshd[5881]: Connection closed by 10.0.0.1 port 34790 Mar 2 13:49:51.091975 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:51.113718 systemd[1]: sshd@56-10.0.0.75:22-10.0.0.1:34790.service: Deactivated successfully. Mar 2 13:49:51.141479 systemd[1]: session-57.scope: Deactivated successfully. Mar 2 13:49:51.153750 systemd-logind[1541]: Session 57 logged out. Waiting for processes to exit. Mar 2 13:49:51.159380 systemd-logind[1541]: Removed session 57. Mar 2 13:49:51.608844 kubelet[3001]: E0302 13:49:51.607954 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:49:56.144926 systemd[1]: Started sshd@57-10.0.0.75:22-10.0.0.1:41290.service - OpenSSH per-connection server daemon (10.0.0.1:41290). Mar 2 13:49:56.410332 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 41290 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:49:56.424489 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:49:56.483921 systemd-logind[1541]: New session 58 of user core. Mar 2 13:49:56.501297 systemd[1]: Started session-58.scope - Session 58 of User core. Mar 2 13:49:57.441050 sshd[5900]: Connection closed by 10.0.0.1 port 41290 Mar 2 13:49:57.443985 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Mar 2 13:49:57.495021 systemd[1]: sshd@57-10.0.0.75:22-10.0.0.1:41290.service: Deactivated successfully. Mar 2 13:49:57.514861 systemd[1]: session-58.scope: Deactivated successfully. Mar 2 13:49:57.539854 systemd-logind[1541]: Session 58 logged out. Waiting for processes to exit. Mar 2 13:49:57.552535 systemd[1]: Started sshd@58-10.0.0.75:22-10.0.0.1:41304.service - OpenSSH per-connection server daemon (10.0.0.1:41304). Mar 2 13:49:57.565964 systemd-logind[1541]: Removed session 58. 
Mar 2 13:49:57.892943 sshd[5913]: Accepted publickey for core from 10.0.0.1 port 41304 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:49:57.896398 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:49:57.933098 systemd-logind[1541]: New session 59 of user core.
Mar 2 13:49:57.962958 systemd[1]: Started session-59.scope - Session 59 of User core.
Mar 2 13:50:00.822012 sshd[5916]: Connection closed by 10.0.0.1 port 41304
Mar 2 13:50:00.828364 sshd-session[5913]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:00.914971 systemd[1]: sshd@58-10.0.0.75:22-10.0.0.1:41304.service: Deactivated successfully.
Mar 2 13:50:00.947159 systemd[1]: session-59.scope: Deactivated successfully.
Mar 2 13:50:00.958095 systemd-logind[1541]: Session 59 logged out. Waiting for processes to exit.
Mar 2 13:50:00.982337 systemd[1]: Started sshd@59-10.0.0.75:22-10.0.0.1:42092.service - OpenSSH per-connection server daemon (10.0.0.1:42092).
Mar 2 13:50:01.018342 systemd-logind[1541]: Removed session 59.
Mar 2 13:50:01.533337 sshd[5927]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:01.551026 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:01.616721 systemd-logind[1541]: New session 60 of user core.
Mar 2 13:50:01.639290 systemd[1]: Started session-60.scope - Session 60 of User core.
Mar 2 13:50:02.621758 kubelet[3001]: E0302 13:50:02.619728 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:04.612162 kubelet[3001]: E0302 13:50:04.612122 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:06.997746 sshd[5930]: Connection closed by 10.0.0.1 port 42092
Mar 2 13:50:07.001292 sshd-session[5927]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:07.072937 systemd[1]: Started sshd@60-10.0.0.75:22-10.0.0.1:42116.service - OpenSSH per-connection server daemon (10.0.0.1:42116).
Mar 2 13:50:07.074065 systemd[1]: sshd@59-10.0.0.75:22-10.0.0.1:42092.service: Deactivated successfully.
Mar 2 13:50:07.128725 systemd[1]: session-60.scope: Deactivated successfully.
Mar 2 13:50:07.129307 systemd[1]: session-60.scope: Consumed 1.649s CPU time, 45.2M memory peak.
Mar 2 13:50:07.197017 systemd-logind[1541]: Session 60 logged out. Waiting for processes to exit.
Mar 2 13:50:07.261997 systemd-logind[1541]: Removed session 60.
Mar 2 13:50:07.856700 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 42116 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:07.893545 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:08.034992 systemd-logind[1541]: New session 61 of user core.
Mar 2 13:50:08.055361 systemd[1]: Started session-61.scope - Session 61 of User core.
Mar 2 13:50:08.614735 kubelet[3001]: E0302 13:50:08.613005 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:09.387379 sshd[5955]: Connection closed by 10.0.0.1 port 42116
Mar 2 13:50:09.384487 sshd-session[5948]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:09.486350 systemd[1]: sshd@60-10.0.0.75:22-10.0.0.1:42116.service: Deactivated successfully.
Mar 2 13:50:09.530399 systemd[1]: session-61.scope: Deactivated successfully.
Mar 2 13:50:09.539944 systemd-logind[1541]: Session 61 logged out. Waiting for processes to exit.
Mar 2 13:50:09.545867 systemd[1]: Started sshd@61-10.0.0.75:22-10.0.0.1:42130.service - OpenSSH per-connection server daemon (10.0.0.1:42130).
Mar 2 13:50:09.668958 systemd-logind[1541]: Removed session 61.
Mar 2 13:50:10.195084 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 42130 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:10.208802 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:10.301792 systemd-logind[1541]: New session 62 of user core.
Mar 2 13:50:10.338117 systemd[1]: Started session-62.scope - Session 62 of User core.
Mar 2 13:50:10.699108 kubelet[3001]: E0302 13:50:10.699048 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:11.581889 sshd[5971]: Connection closed by 10.0.0.1 port 42130
Mar 2 13:50:11.562157 sshd-session[5967]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:11.603499 systemd[1]: sshd@61-10.0.0.75:22-10.0.0.1:42130.service: Deactivated successfully.
Mar 2 13:50:11.638071 systemd[1]: session-62.scope: Deactivated successfully.
Mar 2 13:50:11.650778 systemd-logind[1541]: Session 62 logged out. Waiting for processes to exit.
Mar 2 13:50:11.681951 systemd-logind[1541]: Removed session 62.
Mar 2 13:50:16.644198 kubelet[3001]: E0302 13:50:16.643945 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:16.695540 systemd[1]: Started sshd@62-10.0.0.75:22-10.0.0.1:37156.service - OpenSSH per-connection server daemon (10.0.0.1:37156).
Mar 2 13:50:17.198118 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 37156 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:17.226437 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:17.288921 systemd-logind[1541]: New session 63 of user core.
Mar 2 13:50:17.332381 systemd[1]: Started session-63.scope - Session 63 of User core.
Mar 2 13:50:18.149823 sshd[5989]: Connection closed by 10.0.0.1 port 37156
Mar 2 13:50:18.145094 sshd-session[5986]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:18.188963 systemd-logind[1541]: Session 63 logged out. Waiting for processes to exit.
Mar 2 13:50:18.209810 systemd[1]: sshd@62-10.0.0.75:22-10.0.0.1:37156.service: Deactivated successfully.
Mar 2 13:50:18.230186 systemd[1]: session-63.scope: Deactivated successfully.
Mar 2 13:50:18.277923 systemd-logind[1541]: Removed session 63.
Mar 2 13:50:23.256177 systemd[1]: Started sshd@63-10.0.0.75:22-10.0.0.1:52112.service - OpenSSH per-connection server daemon (10.0.0.1:52112).
Mar 2 13:50:23.825497 sshd[6004]: Accepted publickey for core from 10.0.0.1 port 52112 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:23.856947 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:23.926789 systemd-logind[1541]: New session 64 of user core.
Mar 2 13:50:23.961958 systemd[1]: Started session-64.scope - Session 64 of User core.
Mar 2 13:50:25.214431 sshd[6007]: Connection closed by 10.0.0.1 port 52112
Mar 2 13:50:25.206319 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:25.248448 systemd[1]: sshd@63-10.0.0.75:22-10.0.0.1:52112.service: Deactivated successfully.
Mar 2 13:50:25.282102 systemd[1]: session-64.scope: Deactivated successfully.
Mar 2 13:50:25.310315 systemd-logind[1541]: Session 64 logged out. Waiting for processes to exit.
Mar 2 13:50:25.320976 systemd-logind[1541]: Removed session 64.
Mar 2 13:50:28.742328 kubelet[3001]: E0302 13:50:28.738407 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:50:30.314812 systemd[1]: Started sshd@64-10.0.0.75:22-10.0.0.1:58936.service - OpenSSH per-connection server daemon (10.0.0.1:58936).
Mar 2 13:50:30.776828 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 58936 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:30.772023 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:30.829051 systemd-logind[1541]: New session 65 of user core.
Mar 2 13:50:30.895046 systemd[1]: Started session-65.scope - Session 65 of User core.
Mar 2 13:50:32.381164 sshd[6026]: Connection closed by 10.0.0.1 port 58936
Mar 2 13:50:32.380075 sshd-session[6023]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:32.493962 systemd[1]: sshd@64-10.0.0.75:22-10.0.0.1:58936.service: Deactivated successfully.
Mar 2 13:50:32.494317 systemd-logind[1541]: Session 65 logged out. Waiting for processes to exit.
Mar 2 13:50:32.552091 systemd[1]: session-65.scope: Deactivated successfully.
Mar 2 13:50:32.603397 systemd-logind[1541]: Removed session 65.
Mar 2 13:50:37.510955 systemd[1]: Started sshd@65-10.0.0.75:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002).
Mar 2 13:50:37.934170 sshd[6040]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:37.943727 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:38.032996 systemd-logind[1541]: New session 66 of user core.
Mar 2 13:50:38.061137 systemd[1]: Started session-66.scope - Session 66 of User core.
Mar 2 13:50:38.789537 sshd[6043]: Connection closed by 10.0.0.1 port 59002
Mar 2 13:50:38.804089 sshd-session[6040]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:38.835800 systemd[1]: sshd@65-10.0.0.75:22-10.0.0.1:59002.service: Deactivated successfully.
Mar 2 13:50:38.848967 systemd[1]: session-66.scope: Deactivated successfully.
Mar 2 13:50:38.857719 systemd-logind[1541]: Session 66 logged out. Waiting for processes to exit.
Mar 2 13:50:38.866830 systemd-logind[1541]: Removed session 66.
Mar 2 13:50:43.884829 systemd[1]: Started sshd@66-10.0.0.75:22-10.0.0.1:41676.service - OpenSSH per-connection server daemon (10.0.0.1:41676).
Mar 2 13:50:44.212997 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 41676 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:44.261425 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:44.340029 systemd-logind[1541]: New session 67 of user core.
Mar 2 13:50:44.361902 systemd[1]: Started session-67.scope - Session 67 of User core.
Mar 2 13:50:45.465875 sshd[6061]: Connection closed by 10.0.0.1 port 41676
Mar 2 13:50:45.467949 sshd-session[6058]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:45.494781 systemd[1]: sshd@66-10.0.0.75:22-10.0.0.1:41676.service: Deactivated successfully.
Mar 2 13:50:45.505098 systemd[1]: session-67.scope: Deactivated successfully.
Mar 2 13:50:45.508989 systemd-logind[1541]: Session 67 logged out. Waiting for processes to exit.
Mar 2 13:50:45.538809 systemd-logind[1541]: Removed session 67.
Mar 2 13:50:50.597460 systemd[1]: Started sshd@67-10.0.0.75:22-10.0.0.1:46484.service - OpenSSH per-connection server daemon (10.0.0.1:46484).
Mar 2 13:50:51.199400 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:51.243483 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:51.334077 systemd-logind[1541]: New session 68 of user core.
Mar 2 13:50:51.404777 systemd[1]: Started session-68.scope - Session 68 of User core.
Mar 2 13:50:52.773542 sshd[6079]: Connection closed by 10.0.0.1 port 46484
Mar 2 13:50:52.773995 sshd-session[6076]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:52.809061 systemd[1]: sshd@67-10.0.0.75:22-10.0.0.1:46484.service: Deactivated successfully.
Mar 2 13:50:52.837326 systemd[1]: session-68.scope: Deactivated successfully.
Mar 2 13:50:52.886755 systemd-logind[1541]: Session 68 logged out. Waiting for processes to exit.
Mar 2 13:50:52.914006 systemd-logind[1541]: Removed session 68.
Mar 2 13:50:57.957400 systemd[1]: Started sshd@68-10.0.0.75:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540).
Mar 2 13:50:58.414967 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:50:58.420094 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:50:58.496760 systemd-logind[1541]: New session 69 of user core.
Mar 2 13:50:58.525072 systemd[1]: Started session-69.scope - Session 69 of User core.
Mar 2 13:50:59.921146 sshd[6098]: Connection closed by 10.0.0.1 port 46540
Mar 2 13:50:59.935011 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
Mar 2 13:50:59.996436 systemd[1]: sshd@68-10.0.0.75:22-10.0.0.1:46540.service: Deactivated successfully.
Mar 2 13:51:00.017857 systemd[1]: session-69.scope: Deactivated successfully.
Mar 2 13:51:00.053163 systemd-logind[1541]: Session 69 logged out. Waiting for processes to exit.
Mar 2 13:51:00.111465 systemd-logind[1541]: Removed session 69.
Mar 2 13:51:02.961482 kubelet[3001]: E0302 13:51:02.957096 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:05.067979 systemd[1]: Started sshd@69-10.0.0.75:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964).
Mar 2 13:51:05.534939 sshd[6111]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:05.543026 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:05.640078 systemd-logind[1541]: New session 70 of user core.
Mar 2 13:51:05.709012 systemd[1]: Started session-70.scope - Session 70 of User core.
Mar 2 13:51:07.202737 sshd[6114]: Connection closed by 10.0.0.1 port 45964
Mar 2 13:51:07.202065 sshd-session[6111]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:07.241537 systemd[1]: sshd@69-10.0.0.75:22-10.0.0.1:45964.service: Deactivated successfully.
Mar 2 13:51:07.261147 systemd[1]: session-70.scope: Deactivated successfully.
Mar 2 13:51:07.285391 systemd-logind[1541]: Session 70 logged out. Waiting for processes to exit.
Mar 2 13:51:07.308105 systemd-logind[1541]: Removed session 70.
Mar 2 13:51:11.614907 kubelet[3001]: E0302 13:51:11.611923 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:12.337503 systemd[1]: Started sshd@70-10.0.0.75:22-10.0.0.1:46208.service - OpenSSH per-connection server daemon (10.0.0.1:46208).
Mar 2 13:51:12.783807 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 46208 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:12.806736 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:12.879992 systemd-logind[1541]: New session 71 of user core.
Mar 2 13:51:12.941987 systemd[1]: Started session-71.scope - Session 71 of User core.
Mar 2 13:51:14.368414 sshd[6130]: Connection closed by 10.0.0.1 port 46208
Mar 2 13:51:14.368252 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:14.416049 systemd[1]: sshd@70-10.0.0.75:22-10.0.0.1:46208.service: Deactivated successfully.
Mar 2 13:51:14.430261 systemd[1]: session-71.scope: Deactivated successfully.
Mar 2 13:51:14.507177 systemd-logind[1541]: Session 71 logged out. Waiting for processes to exit.
Mar 2 13:51:14.541109 systemd-logind[1541]: Removed session 71.
Mar 2 13:51:15.616007 kubelet[3001]: E0302 13:51:15.615386 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:15.635841 kubelet[3001]: E0302 13:51:15.628413 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:19.506134 systemd[1]: Started sshd@71-10.0.0.75:22-10.0.0.1:46246.service - OpenSSH per-connection server daemon (10.0.0.1:46246).
Mar 2 13:51:20.044052 sshd[6143]: Accepted publickey for core from 10.0.0.1 port 46246 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:20.076836 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:20.151510 systemd-logind[1541]: New session 72 of user core.
Mar 2 13:51:20.216928 systemd[1]: Started session-72.scope - Session 72 of User core.
Mar 2 13:51:21.997659 sshd[6146]: Connection closed by 10.0.0.1 port 46246
Mar 2 13:51:21.999941 sshd-session[6143]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:22.029923 systemd[1]: sshd@71-10.0.0.75:22-10.0.0.1:46246.service: Deactivated successfully.
Mar 2 13:51:22.053292 systemd[1]: session-72.scope: Deactivated successfully.
Mar 2 13:51:22.073096 systemd-logind[1541]: Session 72 logged out. Waiting for processes to exit.
Mar 2 13:51:22.099172 systemd-logind[1541]: Removed session 72.
Mar 2 13:51:27.047013 systemd[1]: Started sshd@72-10.0.0.75:22-10.0.0.1:34902.service - OpenSSH per-connection server daemon (10.0.0.1:34902).
Mar 2 13:51:27.349609 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 34902 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:27.368816 sshd-session[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:27.430757 systemd-logind[1541]: New session 73 of user core.
Mar 2 13:51:27.449840 systemd[1]: Started session-73.scope - Session 73 of User core.
Mar 2 13:51:30.179264 kubelet[3001]: E0302 13:51:30.175242 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:30.179264 kubelet[3001]: E0302 13:51:30.261171 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.026s"
Mar 2 13:51:35.437138 kubelet[3001]: E0302 13:51:35.437034 3001 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.724s"
Mar 2 13:51:35.452681 kubelet[3001]: E0302 13:51:35.451711 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:36.403408 sshd[6167]: Connection closed by 10.0.0.1 port 34902
Mar 2 13:51:36.399130 sshd-session[6163]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:36.445546 systemd[1]: sshd@72-10.0.0.75:22-10.0.0.1:34902.service: Deactivated successfully.
Mar 2 13:51:36.459923 systemd-logind[1541]: Session 73 logged out. Waiting for processes to exit.
Mar 2 13:51:36.470792 systemd[1]: session-73.scope: Deactivated successfully.
Mar 2 13:51:36.497499 systemd-logind[1541]: Removed session 73.
Mar 2 13:51:39.609738 kubelet[3001]: E0302 13:51:39.607944 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:41.437092 systemd[1]: Started sshd@73-10.0.0.75:22-10.0.0.1:42644.service - OpenSSH per-connection server daemon (10.0.0.1:42644).
Mar 2 13:51:41.821840 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 42644 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:41.828770 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:41.895787 systemd-logind[1541]: New session 74 of user core.
Mar 2 13:51:41.928820 systemd[1]: Started session-74.scope - Session 74 of User core.
Mar 2 13:51:42.610643 kubelet[3001]: E0302 13:51:42.608816 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:51:42.742520 sshd[6186]: Connection closed by 10.0.0.1 port 42644
Mar 2 13:51:42.744243 sshd-session[6182]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:42.763839 systemd[1]: sshd@73-10.0.0.75:22-10.0.0.1:42644.service: Deactivated successfully.
Mar 2 13:51:42.774231 systemd[1]: session-74.scope: Deactivated successfully.
Mar 2 13:51:42.791095 systemd-logind[1541]: Session 74 logged out. Waiting for processes to exit.
Mar 2 13:51:42.811304 systemd-logind[1541]: Removed session 74.
Mar 2 13:51:47.821715 systemd[1]: Started sshd@74-10.0.0.75:22-10.0.0.1:42698.service - OpenSSH per-connection server daemon (10.0.0.1:42698).
Mar 2 13:51:48.092097 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 42698 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:48.099803 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:48.155708 systemd-logind[1541]: New session 75 of user core.
Mar 2 13:51:48.174519 systemd[1]: Started session-75.scope - Session 75 of User core.
Mar 2 13:51:48.757067 sshd[6202]: Connection closed by 10.0.0.1 port 42698
Mar 2 13:51:48.762949 sshd-session[6199]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:48.822846 systemd-logind[1541]: Session 75 logged out. Waiting for processes to exit.
Mar 2 13:51:48.838690 systemd[1]: sshd@74-10.0.0.75:22-10.0.0.1:42698.service: Deactivated successfully.
Mar 2 13:51:48.899896 systemd[1]: session-75.scope: Deactivated successfully.
Mar 2 13:51:48.923169 systemd-logind[1541]: Removed session 75.
Mar 2 13:51:53.806841 systemd[1]: Started sshd@75-10.0.0.75:22-10.0.0.1:37570.service - OpenSSH per-connection server daemon (10.0.0.1:37570).
Mar 2 13:51:54.114491 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 37570 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:54.125803 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:54.182702 systemd-logind[1541]: New session 76 of user core.
Mar 2 13:51:54.203084 systemd[1]: Started session-76.scope - Session 76 of User core.
Mar 2 13:51:54.764478 sshd[6222]: Connection closed by 10.0.0.1 port 37570
Mar 2 13:51:54.766949 sshd-session[6219]: pam_unix(sshd:session): session closed for user core
Mar 2 13:51:54.815218 systemd[1]: sshd@75-10.0.0.75:22-10.0.0.1:37570.service: Deactivated successfully.
Mar 2 13:51:54.826503 systemd[1]: session-76.scope: Deactivated successfully.
Mar 2 13:51:54.849422 systemd-logind[1541]: Session 76 logged out. Waiting for processes to exit.
Mar 2 13:51:54.889527 systemd[1]: Started sshd@76-10.0.0.75:22-10.0.0.1:37578.service - OpenSSH per-connection server daemon (10.0.0.1:37578).
Mar 2 13:51:54.901724 systemd-logind[1541]: Removed session 76.
Mar 2 13:51:55.171232 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 37578 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:51:55.191326 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:51:55.225019 systemd-logind[1541]: New session 77 of user core.
Mar 2 13:51:55.240089 systemd[1]: Started session-77.scope - Session 77 of User core.
Mar 2 13:52:01.477111 containerd[1564]: time="2026-03-02T13:52:01.472123369Z" level=info msg="StopContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" with timeout 30 (s)"
Mar 2 13:52:01.489949 containerd[1564]: time="2026-03-02T13:52:01.489813097Z" level=info msg="Stop container \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" with signal terminated"
Mar 2 13:52:01.728922 systemd[1]: cri-containerd-5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b.scope: Deactivated successfully.
Mar 2 13:52:01.744188 systemd[1]: cri-containerd-5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b.scope: Consumed 2.671s CPU time, 33M memory peak, 4K written to disk.
Mar 2 13:52:01.769232 containerd[1564]: time="2026-03-02T13:52:01.768455184Z" level=info msg="received container exit event container_id:\"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" id:\"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" pid:5675 exited_at:{seconds:1772459521 nanos:741199534}"
Mar 2 13:52:01.981914 containerd[1564]: time="2026-03-02T13:52:01.980758433Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:52:02.001823 containerd[1564]: time="2026-03-02T13:52:01.998758092Z" level=info msg="StopContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" with timeout 2 (s)"
Mar 2 13:52:02.001823 containerd[1564]: time="2026-03-02T13:52:01.999176323Z" level=info msg="Stop container \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" with signal terminated"
Mar 2 13:52:02.251961 systemd-networkd[1454]: lxc_health: Link DOWN
Mar 2 13:52:02.251975 systemd-networkd[1454]: lxc_health: Lost carrier
Mar 2 13:52:02.367049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b-rootfs.mount: Deactivated successfully.
Mar 2 13:52:02.409051 systemd[1]: cri-containerd-8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba.scope: Deactivated successfully.
Mar 2 13:52:02.416077 systemd[1]: cri-containerd-8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba.scope: Consumed 49.698s CPU time, 143.8M memory peak, 945K read from disk, 13.3M written to disk.
Mar 2 13:52:02.457949 containerd[1564]: time="2026-03-02T13:52:02.457144245Z" level=info msg="received container exit event container_id:\"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" id:\"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" pid:3676 exited_at:{seconds:1772459522 nanos:436488491}"
Mar 2 13:52:02.544056 containerd[1564]: time="2026-03-02T13:52:02.543697661Z" level=info msg="StopContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" returns successfully"
Mar 2 13:52:02.584784 containerd[1564]: time="2026-03-02T13:52:02.584724616Z" level=info msg="StopPodSandbox for \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\""
Mar 2 13:52:02.593251 containerd[1564]: time="2026-03-02T13:52:02.593206995Z" level=info msg="Container to stop \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:02.597908 containerd[1564]: time="2026-03-02T13:52:02.597877874Z" level=info msg="Container to stop \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:02.716894 systemd[1]: cri-containerd-d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d.scope: Deactivated successfully.
Mar 2 13:52:02.788002 containerd[1564]: time="2026-03-02T13:52:02.787855698Z" level=info msg="received sandbox exit event container_id:\"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" id:\"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" exit_status:137 exited_at:{seconds:1772459522 nanos:780950201}" monitor_name=podsandbox
Mar 2 13:52:02.849697 sshd[6239]: Connection closed by 10.0.0.1 port 37578
Mar 2 13:52:02.829721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba-rootfs.mount: Deactivated successfully.
Mar 2 13:52:02.855864 sshd-session[6236]: pam_unix(sshd:session): session closed for user core
Mar 2 13:52:02.923851 systemd[1]: Started sshd@77-10.0.0.75:22-10.0.0.1:56608.service - OpenSSH per-connection server daemon (10.0.0.1:56608).
Mar 2 13:52:02.928407 systemd[1]: sshd@76-10.0.0.75:22-10.0.0.1:37578.service: Deactivated successfully.
Mar 2 13:52:02.944188 systemd[1]: session-77.scope: Deactivated successfully.
Mar 2 13:52:02.947033 systemd[1]: session-77.scope: Consumed 2.156s CPU time, 30.8M memory peak.
Mar 2 13:52:02.959992 systemd-logind[1541]: Session 77 logged out. Waiting for processes to exit.
Mar 2 13:52:03.019008 systemd-logind[1541]: Removed session 77.
Mar 2 13:52:03.081951 containerd[1564]: time="2026-03-02T13:52:03.080819428Z" level=info msg="StopContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" returns successfully"
Mar 2 13:52:03.088840 containerd[1564]: time="2026-03-02T13:52:03.088804070Z" level=info msg="StopPodSandbox for \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\""
Mar 2 13:52:03.089126 containerd[1564]: time="2026-03-02T13:52:03.089099459Z" level=info msg="Container to stop \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:03.089246 containerd[1564]: time="2026-03-02T13:52:03.089222659Z" level=info msg="Container to stop \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:03.092838 containerd[1564]: time="2026-03-02T13:52:03.092809700Z" level=info msg="Container to stop \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:03.092935 containerd[1564]: time="2026-03-02T13:52:03.092912362Z" level=info msg="Container to stop \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:03.093031 containerd[1564]: time="2026-03-02T13:52:03.093011386Z" level=info msg="Container to stop \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:52:03.226193 systemd[1]: cri-containerd-322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90.scope: Deactivated successfully.
Mar 2 13:52:03.246282 containerd[1564]: time="2026-03-02T13:52:03.246230350Z" level=info msg="received sandbox exit event container_id:\"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" id:\"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" exit_status:137 exited_at:{seconds:1772459523 nanos:245523734}" monitor_name=podsandbox
Mar 2 13:52:03.254289 kubelet[3001]: I0302 13:52:03.251746 3001 scope.go:117] "RemoveContainer" containerID="c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8"
Mar 2 13:52:03.318278 containerd[1564]: time="2026-03-02T13:52:03.318227377Z" level=info msg="RemoveContainer for \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\""
Mar 2 13:52:03.323087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d-rootfs.mount: Deactivated successfully.
Mar 2 13:52:03.562886 containerd[1564]: time="2026-03-02T13:52:03.559119193Z" level=info msg="shim disconnected" id=d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d namespace=k8s.io
Mar 2 13:52:03.563917 containerd[1564]: time="2026-03-02T13:52:03.563763975Z" level=warning msg="cleaning up after shim disconnected" id=d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d namespace=k8s.io
Mar 2 13:52:03.582796 containerd[1564]: time="2026-03-02T13:52:03.580547041Z" level=info msg="RemoveContainer for \"c1cf40b03b656daac2f1714153503344c9a24993e6bb994051ca21f9f0dd78b8\" returns successfully"
Mar 2 13:52:03.617538 containerd[1564]: time="2026-03-02T13:52:03.582299665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:52:03.664793 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 56608 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:52:03.679128 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:52:03.719265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90-rootfs.mount: Deactivated successfully.
Mar 2 13:52:03.746467 containerd[1564]: time="2026-03-02T13:52:03.746421859Z" level=info msg="shim disconnected" id=322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90 namespace=k8s.io
Mar 2 13:52:03.747170 containerd[1564]: time="2026-03-02T13:52:03.746833135Z" level=warning msg="cleaning up after shim disconnected" id=322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90 namespace=k8s.io
Mar 2 13:52:03.747170 containerd[1564]: time="2026-03-02T13:52:03.746860667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:52:03.815673 systemd-logind[1541]: New session 78 of user core.
Mar 2 13:52:03.856007 systemd[1]: Started session-78.scope - Session 78 of User core.
Mar 2 13:52:04.063142 containerd[1564]: time="2026-03-02T13:52:04.063087627Z" level=info msg="TearDown network for sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" successfully"
Mar 2 13:52:04.069936 containerd[1564]: time="2026-03-02T13:52:04.069814217Z" level=info msg="StopPodSandbox for \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" returns successfully"
Mar 2 13:52:04.091770 containerd[1564]: time="2026-03-02T13:52:04.091721118Z" level=info msg="TearDown network for sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" successfully"
Mar 2 13:52:04.091929 containerd[1564]: time="2026-03-02T13:52:04.091907395Z" level=info msg="StopPodSandbox for \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" returns successfully"
Mar 2 13:52:04.093084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90-shm.mount: Deactivated successfully.
Mar 2 13:52:04.096229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d-shm.mount: Deactivated successfully. Mar 2 13:52:04.139701 containerd[1564]: time="2026-03-02T13:52:04.136917638Z" level=info msg="received sandbox container exit event sandbox_id:\"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" exit_status:137 exited_at:{seconds:1772459523 nanos:245523734}" monitor_name=criService Mar 2 13:52:04.139701 containerd[1564]: time="2026-03-02T13:52:04.138525539Z" level=info msg="received sandbox container exit event sandbox_id:\"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" exit_status:137 exited_at:{seconds:1772459522 nanos:780950201}" monitor_name=criService Mar 2 13:52:04.329789 kubelet[3001]: I0302 13:52:04.303545 3001 scope.go:117] "RemoveContainer" containerID="5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b" Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.403964 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg6x2\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-kube-api-access-vg6x2\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.404257 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-net\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.404288 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75e718b7-73eb-4c96-86ba-b3f5c425bc53-clustermesh-secrets\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: 
\"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.404407 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hubble-tls\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.404441 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cni-path\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.414897 kubelet[3001]: I0302 13:52:04.404465 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-config-path\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404492 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hostproc\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404511 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-etc-cni-netd\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404536 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-cilium-config-path\") pod \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\" (UID: \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404721 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-lib-modules\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404745 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-xtables-lock\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415441 kubelet[3001]: I0302 13:52:04.404765 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-bpf-maps\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.404787 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-cgroup\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.404813 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-run\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.404834 3001 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-kernel\") pod \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\" (UID: \"75e718b7-73eb-4c96-86ba-b3f5c425bc53\") " Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.404857 3001 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wlz\" (UniqueName: \"kubernetes.io/projected/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-kube-api-access-w4wlz\") pod \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\" (UID: \"0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819\") " Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.404876 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cni-path" (OuterVolumeSpecName: "cni-path") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.415909 kubelet[3001]: I0302 13:52:04.405016 3001 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.416982 containerd[1564]: time="2026-03-02T13:52:04.416945168Z" level=info msg="RemoveContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\"" Mar 2 13:52:04.436950 kubelet[3001]: I0302 13:52:04.433993 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.439016 kubelet[3001]: I0302 13:52:04.438978 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hostproc" (OuterVolumeSpecName: "hostproc") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.439197 kubelet[3001]: I0302 13:52:04.439169 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.455734 kubelet[3001]: I0302 13:52:04.444896 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.455734 kubelet[3001]: I0302 13:52:04.449878 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.455734 kubelet[3001]: I0302 13:52:04.450021 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.455734 kubelet[3001]: I0302 13:52:04.450057 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.455734 kubelet[3001]: I0302 13:52:04.450084 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.456019 kubelet[3001]: I0302 13:52:04.451247 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:52:04.488851 systemd[1]: var-lib-kubelet-pods-75e718b7\x2d73eb\x2d4c96\x2d86ba\x2db3f5c425bc53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvg6x2.mount: Deactivated successfully. 
Mar 2 13:52:04.494904 systemd[1]: var-lib-kubelet-pods-75e718b7\x2d73eb\x2d4c96\x2d86ba\x2db3f5c425bc53-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 2 13:52:04.511777 containerd[1564]: time="2026-03-02T13:52:04.510011272Z" level=info msg="RemoveContainer for \"5ee6e8e0cc529c190207d74610808ead9574a8eda58a564b8b08e8ef5327352b\" returns successfully" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515791 3001 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515830 3001 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515844 3001 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515857 3001 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515870 3001 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515885 3001 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 
13:52:04.515950 kubelet[3001]: I0302 13:52:04.515898 3001 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.515950 kubelet[3001]: I0302 13:52:04.515910 3001 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.516398 kubelet[3001]: I0302 13:52:04.515921 3001 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75e718b7-73eb-4c96-86ba-b3f5c425bc53-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.526172 kubelet[3001]: I0302 13:52:04.526106 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819" (UID: "0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:52:04.526404 kubelet[3001]: I0302 13:52:04.526290 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:52:04.533761 systemd[1]: var-lib-kubelet-pods-75e718b7\x2d73eb\x2d4c96\x2d86ba\x2db3f5c425bc53-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 2 13:52:04.535185 kubelet[3001]: I0302 13:52:04.535144 3001 scope.go:117] "RemoveContainer" containerID="8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba" Mar 2 13:52:04.537788 kubelet[3001]: I0302 13:52:04.537758 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-kube-api-access-vg6x2" (OuterVolumeSpecName: "kube-api-access-vg6x2") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "kube-api-access-vg6x2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:52:04.554277 kubelet[3001]: I0302 13:52:04.553886 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:52:04.554277 kubelet[3001]: I0302 13:52:04.554137 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-kube-api-access-w4wlz" (OuterVolumeSpecName: "kube-api-access-w4wlz") pod "0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819" (UID: "0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819"). InnerVolumeSpecName "kube-api-access-w4wlz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:52:04.557199 kubelet[3001]: I0302 13:52:04.555492 3001 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e718b7-73eb-4c96-86ba-b3f5c425bc53-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75e718b7-73eb-4c96-86ba-b3f5c425bc53" (UID: "75e718b7-73eb-4c96-86ba-b3f5c425bc53"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:52:04.566524 containerd[1564]: time="2026-03-02T13:52:04.562814270Z" level=info msg="RemoveContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\"" Mar 2 13:52:04.617727 kubelet[3001]: E0302 13:52:04.612854 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617772 3001 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4wlz\" (UniqueName: \"kubernetes.io/projected/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-kube-api-access-w4wlz\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617802 3001 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vg6x2\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-kube-api-access-vg6x2\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617820 3001 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75e718b7-73eb-4c96-86ba-b3f5c425bc53-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617831 3001 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75e718b7-73eb-4c96-86ba-b3f5c425bc53-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617844 3001 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e718b7-73eb-4c96-86ba-b3f5c425bc53-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.617875 kubelet[3001]: I0302 13:52:04.617855 3001 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:52:04.629155 containerd[1564]: time="2026-03-02T13:52:04.618279986Z" level=info msg="RemoveContainer for \"8cb72034ed7c68f800d1fc6a2ea113862e1972c35b3f4d42de665672d35456ba\" returns successfully" Mar 2 13:52:04.640048 kubelet[3001]: I0302 13:52:04.640012 3001 scope.go:117] "RemoveContainer" containerID="0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c" Mar 2 13:52:04.671274 systemd[1]: Removed slice kubepods-besteffort-pod0bc16fa9_9b8c_49ca_9fa7_89c2e1c8a819.slice - libcontainer container kubepods-besteffort-pod0bc16fa9_9b8c_49ca_9fa7_89c2e1c8a819.slice. Mar 2 13:52:04.674845 systemd[1]: kubepods-besteffort-pod0bc16fa9_9b8c_49ca_9fa7_89c2e1c8a819.slice: Consumed 9.949s CPU time, 33.3M memory peak, 8K written to disk. Mar 2 13:52:04.701511 systemd[1]: var-lib-kubelet-pods-0bc16fa9\x2d9b8c\x2d49ca\x2d9fa7\x2d89c2e1c8a819-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4wlz.mount: Deactivated successfully. Mar 2 13:52:04.746206 containerd[1564]: time="2026-03-02T13:52:04.685894423Z" level=info msg="RemoveContainer for \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\"" Mar 2 13:52:04.735753 systemd[1]: Removed slice kubepods-burstable-pod75e718b7_73eb_4c96_86ba_b3f5c425bc53.slice - libcontainer container kubepods-burstable-pod75e718b7_73eb_4c96_86ba_b3f5c425bc53.slice. Mar 2 13:52:04.735887 systemd[1]: kubepods-burstable-pod75e718b7_73eb_4c96_86ba_b3f5c425bc53.slice: Consumed 50.575s CPU time, 144.1M memory peak, 1M read from disk, 13.3M written to disk. 
Mar 2 13:52:04.804274 containerd[1564]: time="2026-03-02T13:52:04.804221370Z" level=info msg="RemoveContainer for \"0ba3994a20fd8f3f7c2ad5c1f76aecd7cbe065a825270cc8050af7b31387618c\" returns successfully" Mar 2 13:52:04.826978 kubelet[3001]: I0302 13:52:04.825052 3001 scope.go:117] "RemoveContainer" containerID="6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab" Mar 2 13:52:04.892055 containerd[1564]: time="2026-03-02T13:52:04.890204493Z" level=info msg="RemoveContainer for \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\"" Mar 2 13:52:04.968226 containerd[1564]: time="2026-03-02T13:52:04.968173393Z" level=info msg="RemoveContainer for \"6ae4667fa0454554a3a5ca23779a29c944fa0514c54194651000b84005860bab\" returns successfully" Mar 2 13:52:04.979863 kubelet[3001]: I0302 13:52:04.979819 3001 scope.go:117] "RemoveContainer" containerID="10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80" Mar 2 13:52:05.009102 containerd[1564]: time="2026-03-02T13:52:05.007744154Z" level=info msg="RemoveContainer for \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\"" Mar 2 13:52:05.060244 containerd[1564]: time="2026-03-02T13:52:05.060185337Z" level=info msg="RemoveContainer for \"10d86e9395684fdb55e7c5761d6d80d0f394455e1f539e007723bec9ad3c4a80\" returns successfully" Mar 2 13:52:05.061162 kubelet[3001]: I0302 13:52:05.061133 3001 scope.go:117] "RemoveContainer" containerID="d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9" Mar 2 13:52:05.134717 containerd[1564]: time="2026-03-02T13:52:05.133083355Z" level=info msg="RemoveContainer for \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\"" Mar 2 13:52:05.158741 containerd[1564]: time="2026-03-02T13:52:05.158167883Z" level=info msg="RemoveContainer for \"d3a0abf49e1b34b3cdb214d7f8056e4d87a139142a111d2888c616377d14a4e9\" returns successfully" Mar 2 13:52:05.450276 kubelet[3001]: E0302 13:52:05.448932 3001 kubelet.go:3117] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:52:06.626200 kubelet[3001]: I0302 13:52:06.626151 3001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819" path="/var/lib/kubelet/pods/0bc16fa9-9b8c-49ca-9fa7-89c2e1c8a819/volumes" Mar 2 13:52:06.638783 kubelet[3001]: I0302 13:52:06.638460 3001 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e718b7-73eb-4c96-86ba-b3f5c425bc53" path="/var/lib/kubelet/pods/75e718b7-73eb-4c96-86ba-b3f5c425bc53/volumes" Mar 2 13:52:06.904017 sshd[6388]: Connection closed by 10.0.0.1 port 56608 Mar 2 13:52:06.909428 sshd-session[6326]: pam_unix(sshd:session): session closed for user core Mar 2 13:52:06.987169 systemd[1]: sshd@77-10.0.0.75:22-10.0.0.1:56608.service: Deactivated successfully. Mar 2 13:52:07.001296 systemd[1]: session-78.scope: Deactivated successfully. Mar 2 13:52:07.025693 systemd-logind[1541]: Session 78 logged out. Waiting for processes to exit. Mar 2 13:52:07.036982 systemd[1]: Started sshd@78-10.0.0.75:22-10.0.0.1:56648.service - OpenSSH per-connection server daemon (10.0.0.1:56648). Mar 2 13:52:07.064851 systemd-logind[1541]: Removed session 78. Mar 2 13:52:07.518748 systemd[1]: Created slice kubepods-burstable-pod2626fad6_249e_4d51_a771_6c19aaeec443.slice - libcontainer container kubepods-burstable-pod2626fad6_249e_4d51_a771_6c19aaeec443.slice. 
Mar 2 13:52:07.523060 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 56648 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:52:07.529227 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.551948 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-cni-path\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.552032 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-xtables-lock\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.552063 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2626fad6-249e-4d51-a771-6c19aaeec443-cilium-config-path\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.552089 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7gn5\" (UniqueName: \"kubernetes.io/projected/2626fad6-249e-4d51-a771-6c19aaeec443-kube-api-access-n7gn5\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.552113 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-bpf-maps\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.552717 kubelet[3001]: I0302 13:52:07.552133 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-hostproc\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552158 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-cilium-cgroup\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552179 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-host-proc-sys-kernel\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552211 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2626fad6-249e-4d51-a771-6c19aaeec443-clustermesh-secrets\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552232 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-etc-cni-netd\") pod \"cilium-25ss2\" (UID: 
\"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552258 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-lib-modules\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553067 kubelet[3001]: I0302 13:52:07.552281 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2626fad6-249e-4d51-a771-6c19aaeec443-hubble-tls\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553484 kubelet[3001]: I0302 13:52:07.552301 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-cilium-run\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553484 kubelet[3001]: I0302 13:52:07.552448 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2626fad6-249e-4d51-a771-6c19aaeec443-cilium-ipsec-secrets\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.553484 kubelet[3001]: I0302 13:52:07.552472 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2626fad6-249e-4d51-a771-6c19aaeec443-host-proc-sys-net\") pod \"cilium-25ss2\" (UID: \"2626fad6-249e-4d51-a771-6c19aaeec443\") " pod="kube-system/cilium-25ss2" Mar 2 13:52:07.594206 systemd-logind[1541]: New session 79 of 
user core. Mar 2 13:52:07.623971 systemd[1]: Started session-79.scope - Session 79 of User core. Mar 2 13:52:07.761723 sshd[6408]: Connection closed by 10.0.0.1 port 56648 Mar 2 13:52:07.768276 sshd-session[6405]: pam_unix(sshd:session): session closed for user core Mar 2 13:52:07.908203 systemd[1]: sshd@78-10.0.0.75:22-10.0.0.1:56648.service: Deactivated successfully. Mar 2 13:52:07.921932 systemd[1]: session-79.scope: Deactivated successfully. Mar 2 13:52:07.935541 systemd-logind[1541]: Session 79 logged out. Waiting for processes to exit. Mar 2 13:52:07.948492 systemd[1]: Started sshd@79-10.0.0.75:22-10.0.0.1:56650.service - OpenSSH per-connection server daemon (10.0.0.1:56650). Mar 2 13:52:08.005162 systemd-logind[1541]: Removed session 79. Mar 2 13:52:08.144738 kubelet[3001]: E0302 13:52:08.144275 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:08.149710 containerd[1564]: time="2026-03-02T13:52:08.149028791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25ss2,Uid:2626fad6-249e-4d51-a771-6c19aaeec443,Namespace:kube-system,Attempt:0,}" Mar 2 13:52:08.270901 sshd[6419]: Accepted publickey for core from 10.0.0.1 port 56650 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:52:08.290207 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:52:08.342965 systemd-logind[1541]: New session 80 of user core. Mar 2 13:52:08.404144 containerd[1564]: time="2026-03-02T13:52:08.401253693Z" level=info msg="connecting to shim 17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:52:08.417804 systemd[1]: Started session-80.scope - Session 80 of User core. 
Mar 2 13:52:09.069194 systemd[1]: Started cri-containerd-17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806.scope - libcontainer container 17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806. Mar 2 13:52:09.954534 containerd[1564]: time="2026-03-02T13:52:09.954291313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25ss2,Uid:2626fad6-249e-4d51-a771-6c19aaeec443,Namespace:kube-system,Attempt:0,} returns sandbox id \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\"" Mar 2 13:52:09.984931 kubelet[3001]: E0302 13:52:09.983082 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:10.048652 containerd[1564]: time="2026-03-02T13:52:10.045985517Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:52:10.293998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179029751.mount: Deactivated successfully. 
Mar 2 13:52:10.303994 containerd[1564]: time="2026-03-02T13:52:10.303940102Z" level=info msg="Container 092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:52:10.531548 kubelet[3001]: E0302 13:52:10.531414 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:52:10.580199 containerd[1564]: time="2026-03-02T13:52:10.562263195Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10\"" Mar 2 13:52:10.604162 containerd[1564]: time="2026-03-02T13:52:10.590257488Z" level=info msg="StartContainer for \"092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10\"" Mar 2 13:52:10.611275 containerd[1564]: time="2026-03-02T13:52:10.611222377Z" level=info msg="connecting to shim 092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" protocol=ttrpc version=3 Mar 2 13:52:10.840779 systemd[1]: Started cri-containerd-092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10.scope - libcontainer container 092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10. Mar 2 13:52:11.359780 containerd[1564]: time="2026-03-02T13:52:11.359205283Z" level=info msg="StartContainer for \"092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10\" returns successfully" Mar 2 13:52:11.557289 systemd[1]: cri-containerd-092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10.scope: Deactivated successfully. 
Mar 2 13:52:11.566468 containerd[1564]: time="2026-03-02T13:52:11.563787985Z" level=info msg="received container exit event container_id:\"092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10\" id:\"092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10\" pid:6487 exited_at:{seconds:1772459531 nanos:561278391}" Mar 2 13:52:11.749812 kubelet[3001]: I0302 13:52:11.749002 3001 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:52:11Z","lastTransitionTime":"2026-03-02T13:52:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 2 13:52:11.834895 kubelet[3001]: E0302 13:52:11.834850 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:12.197143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-092f9363a08c7370576dd9ab8c5ea09fef5583f2182a4f18029971d5148a3b10-rootfs.mount: Deactivated successfully. Mar 2 13:52:12.895271 kubelet[3001]: E0302 13:52:12.894912 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:13.043893 containerd[1564]: time="2026-03-02T13:52:13.043516993Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:52:13.221399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950552485.mount: Deactivated successfully. 
Mar 2 13:52:13.281731 containerd[1564]: time="2026-03-02T13:52:13.280837539Z" level=info msg="Container 8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:52:13.301909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432098103.mount: Deactivated successfully. Mar 2 13:52:13.394531 containerd[1564]: time="2026-03-02T13:52:13.390259608Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8\"" Mar 2 13:52:13.413424 containerd[1564]: time="2026-03-02T13:52:13.413268386Z" level=info msg="StartContainer for \"8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8\"" Mar 2 13:52:13.506535 containerd[1564]: time="2026-03-02T13:52:13.506270869Z" level=info msg="connecting to shim 8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" protocol=ttrpc version=3 Mar 2 13:52:13.828461 systemd[1]: Started cri-containerd-8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8.scope - libcontainer container 8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8. Mar 2 13:52:14.382120 containerd[1564]: time="2026-03-02T13:52:14.382062820Z" level=info msg="StartContainer for \"8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8\" returns successfully" Mar 2 13:52:14.446829 systemd[1]: cri-containerd-8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8.scope: Deactivated successfully. 
Mar 2 13:52:14.465048 containerd[1564]: time="2026-03-02T13:52:14.461099812Z" level=info msg="received container exit event container_id:\"8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8\" id:\"8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8\" pid:6533 exited_at:{seconds:1772459534 nanos:452094860}" Mar 2 13:52:14.821994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe40de34d9d6812a33390bc4abe5c39979031d14144261cded9a8dedb5088f8-rootfs.mount: Deactivated successfully. Mar 2 13:52:15.024878 kubelet[3001]: E0302 13:52:15.014859 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:15.109457 containerd[1564]: time="2026-03-02T13:52:15.093753232Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:52:15.562419 kubelet[3001]: E0302 13:52:15.554851 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:52:15.619440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1627194317.mount: Deactivated successfully. 
Mar 2 13:52:15.636811 containerd[1564]: time="2026-03-02T13:52:15.634984331Z" level=info msg="Container afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:52:15.796743 containerd[1564]: time="2026-03-02T13:52:15.796287962Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec\"" Mar 2 13:52:15.802303 containerd[1564]: time="2026-03-02T13:52:15.802259144Z" level=info msg="StartContainer for \"afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec\"" Mar 2 13:52:15.866962 containerd[1564]: time="2026-03-02T13:52:15.866419607Z" level=info msg="connecting to shim afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" protocol=ttrpc version=3 Mar 2 13:52:16.145217 systemd[1]: Started cri-containerd-afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec.scope - libcontainer container afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec. Mar 2 13:52:17.139766 containerd[1564]: time="2026-03-02T13:52:17.137231033Z" level=info msg="StartContainer for \"afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec\" returns successfully" Mar 2 13:52:17.211146 systemd[1]: cri-containerd-afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec.scope: Deactivated successfully. 
Mar 2 13:52:17.218083 containerd[1564]: time="2026-03-02T13:52:17.216902045Z" level=info msg="received container exit event container_id:\"afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec\" id:\"afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec\" pid:6578 exited_at:{seconds:1772459537 nanos:214027139}" Mar 2 13:52:17.488226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afd6f1252ed102f52f522c7d23948fcabf29b9382fd7e787d6ef08c62c7615ec-rootfs.mount: Deactivated successfully. Mar 2 13:52:18.326193 kubelet[3001]: E0302 13:52:18.319786 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:18.415972 containerd[1564]: time="2026-03-02T13:52:18.397715399Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:52:18.567524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248173596.mount: Deactivated successfully. 
Mar 2 13:52:18.643745 containerd[1564]: time="2026-03-02T13:52:18.643522088Z" level=info msg="Container 5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:52:18.735527 containerd[1564]: time="2026-03-02T13:52:18.728257812Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d\"" Mar 2 13:52:18.752061 containerd[1564]: time="2026-03-02T13:52:18.752008911Z" level=info msg="StartContainer for \"5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d\"" Mar 2 13:52:18.761052 containerd[1564]: time="2026-03-02T13:52:18.760958621Z" level=info msg="connecting to shim 5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" protocol=ttrpc version=3 Mar 2 13:52:19.117203 systemd[1]: Started cri-containerd-5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d.scope - libcontainer container 5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d. Mar 2 13:52:19.719241 systemd[1]: cri-containerd-5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d.scope: Deactivated successfully. 
Mar 2 13:52:19.738199 containerd[1564]: time="2026-03-02T13:52:19.738070636Z" level=info msg="received container exit event container_id:\"5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d\" id:\"5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d\" pid:6617 exited_at:{seconds:1772459539 nanos:727967770}" Mar 2 13:52:19.751884 containerd[1564]: time="2026-03-02T13:52:19.750895223Z" level=info msg="StartContainer for \"5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d\" returns successfully" Mar 2 13:52:20.308951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a0a566e6ba084027e349e3db72ba55ef50d72a43d1610bf56b8bb25aa2b1a8d-rootfs.mount: Deactivated successfully. Mar 2 13:52:20.592136 kubelet[3001]: E0302 13:52:20.589883 3001 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:52:20.614729 kubelet[3001]: E0302 13:52:20.614514 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:20.723805 kubelet[3001]: E0302 13:52:20.722824 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:20.729878 containerd[1564]: time="2026-03-02T13:52:20.729144219Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 2 13:52:20.989755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148493825.mount: Deactivated successfully. 
Mar 2 13:52:21.037944 containerd[1564]: time="2026-03-02T13:52:21.037889664Z" level=info msg="Container dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:52:21.039879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986081758.mount: Deactivated successfully. Mar 2 13:52:21.110548 containerd[1564]: time="2026-03-02T13:52:21.110493752Z" level=info msg="CreateContainer within sandbox \"17fb97d2eb1ab0411c5b03da7a5e4206046465387615d2b40d8155085fdca806\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4\"" Mar 2 13:52:21.126480 containerd[1564]: time="2026-03-02T13:52:21.126308519Z" level=info msg="StartContainer for \"dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4\"" Mar 2 13:52:21.134460 containerd[1564]: time="2026-03-02T13:52:21.134257705Z" level=info msg="connecting to shim dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4" address="unix:///run/containerd/s/76a7938a4364035fc5618a203102c3a853ec3f459a4009ade06c3f224f333320" protocol=ttrpc version=3 Mar 2 13:52:21.414813 systemd[1]: Started cri-containerd-dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4.scope - libcontainer container dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4. 
Mar 2 13:52:22.160219 containerd[1564]: time="2026-03-02T13:52:22.160173089Z" level=info msg="StartContainer for \"dd1fc7e3ab9cc3982f562488e7b65b13068ba0e72f79d6549d0a51a378beffb4\" returns successfully" Mar 2 13:52:24.025155 kubelet[3001]: E0302 13:52:24.016835 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:24.173733 kubelet[3001]: I0302 13:52:24.166261 3001 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-25ss2" podStartSLOduration=17.166242528 podStartE2EDuration="17.166242528s" podCreationTimestamp="2026-03-02 13:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:52:24.13525512 +0000 UTC m=+1021.189937350" watchObservedRunningTime="2026-03-02 13:52:24.166242528 +0000 UTC m=+1021.220924718" Mar 2 13:52:25.029707 kubelet[3001]: E0302 13:52:25.029222 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.596158029Z" level=info msg="StopPodSandbox for \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\"" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.596527968Z" level=info msg="TearDown network for sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" successfully" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.596719184Z" level=info msg="StopPodSandbox for \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" returns successfully" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.598312411Z" level=info msg="RemovePodSandbox for 
\"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\"" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.603956444Z" level=info msg="Forcibly stopping sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\"" Mar 2 13:52:26.606851 containerd[1564]: time="2026-03-02T13:52:26.604110100Z" level=info msg="TearDown network for sandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" successfully" Mar 2 13:52:26.640748 containerd[1564]: time="2026-03-02T13:52:26.640537219Z" level=info msg="Ensure that sandbox 322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90 in task-service has been cleanup successfully" Mar 2 13:52:26.736542 containerd[1564]: time="2026-03-02T13:52:26.736308359Z" level=info msg="RemovePodSandbox \"322159910a2d5320ac0c6101a9f5a6241fc8772a617c1aa5e276ee08a244ac90\" returns successfully" Mar 2 13:52:26.738179 containerd[1564]: time="2026-03-02T13:52:26.738091440Z" level=info msg="StopPodSandbox for \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\"" Mar 2 13:52:26.749085 containerd[1564]: time="2026-03-02T13:52:26.749043597Z" level=info msg="TearDown network for sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" successfully" Mar 2 13:52:26.751937 containerd[1564]: time="2026-03-02T13:52:26.751528505Z" level=info msg="StopPodSandbox for \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" returns successfully" Mar 2 13:52:26.756486 containerd[1564]: time="2026-03-02T13:52:26.753898702Z" level=info msg="RemovePodSandbox for \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\"" Mar 2 13:52:26.756486 containerd[1564]: time="2026-03-02T13:52:26.754028424Z" level=info msg="Forcibly stopping sandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\"" Mar 2 13:52:26.756486 containerd[1564]: time="2026-03-02T13:52:26.754142866Z" level=info msg="TearDown network for sandbox 
\"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" successfully" Mar 2 13:52:26.769802 containerd[1564]: time="2026-03-02T13:52:26.768255639Z" level=info msg="Ensure that sandbox d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d in task-service has been cleanup successfully" Mar 2 13:52:26.784433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Mar 2 13:52:26.804990 containerd[1564]: time="2026-03-02T13:52:26.800102626Z" level=info msg="RemovePodSandbox \"d0bd4d111ee66d1273d4799fc3e837f92c83c815cfccdceab8175c53bbffde1d\" returns successfully" Mar 2 13:52:38.160491 kubelet[3001]: E0302 13:52:38.154083 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:39.621748 kubelet[3001]: E0302 13:52:39.618526 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:40.616195 kubelet[3001]: E0302 13:52:40.616150 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:47.623247 systemd-networkd[1454]: lxc_health: Link UP Mar 2 13:52:47.624151 systemd-networkd[1454]: lxc_health: Gained carrier Mar 2 13:52:48.183281 kubelet[3001]: E0302 13:52:48.161180 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:48.871130 kubelet[3001]: E0302 13:52:48.864950 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:49.742928 systemd-networkd[1454]: lxc_health: Gained 
IPv6LL Mar 2 13:52:49.994911 kubelet[3001]: E0302 13:52:49.993883 3001 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52430->127.0.0.1:44029: write tcp 127.0.0.1:52430->127.0.0.1:44029: write: broken pipe Mar 2 13:52:51.611765 kubelet[3001]: E0302 13:52:51.611284 3001 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:52:54.642837 sshd[6437]: Connection closed by 10.0.0.1 port 56650 Mar 2 13:52:54.617215 sshd-session[6419]: pam_unix(sshd:session): session closed for user core Mar 2 13:52:54.664154 systemd[1]: sshd@79-10.0.0.75:22-10.0.0.1:56650.service: Deactivated successfully. Mar 2 13:52:54.726462 systemd[1]: session-80.scope: Deactivated successfully. Mar 2 13:52:54.728539 systemd[1]: session-80.scope: Consumed 1.775s CPU time, 25.7M memory peak. Mar 2 13:52:54.766260 systemd-logind[1541]: Session 80 logged out. Waiting for processes to exit. Mar 2 13:52:54.839084 systemd-logind[1541]: Removed session 80.