Mar 2 12:51:27.348510 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 12:51:27.348545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 12:51:27.348557 kernel: BIOS-provided physical RAM map:
Mar 2 12:51:27.348569 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 12:51:27.348577 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 12:51:27.348585 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 12:51:27.348594 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 12:51:27.348603 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 12:51:27.348640 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 12:51:27.348650 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 12:51:27.348659 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 2 12:51:27.348669 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 12:51:27.348682 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 12:51:27.348692 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 12:51:27.348703 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 12:51:27.348713 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 12:51:27.348748 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 12:51:27.348762 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 12:51:27.348771 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 12:51:27.348779 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 12:51:27.348788 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 12:51:27.348796 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 12:51:27.348805 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 12:51:27.348814 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 12:51:27.348824 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 12:51:27.348834 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 12:51:27.348844 kernel: NX (Execute Disable) protection: active
Mar 2 12:51:27.348854 kernel: APIC: Static calls initialized
Mar 2 12:51:27.348929 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 2 12:51:27.348940 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 2 12:51:27.348949 kernel: extended physical RAM map:
Mar 2 12:51:27.348959 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 12:51:27.348969 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 12:51:27.348980 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 12:51:27.348991 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 12:51:27.349001 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 12:51:27.349010 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 12:51:27.349018 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 12:51:27.349027 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 2 12:51:27.349086 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 2 12:51:27.349104 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 2 12:51:27.349114 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 2 12:51:27.349123 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 2 12:51:27.349132 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 12:51:27.349145 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 12:51:27.349154 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 12:51:27.349164 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 12:51:27.349175 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 12:51:27.349186 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 12:51:27.349196 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 12:51:27.349207 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 12:51:27.349217 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 12:51:27.349228 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 12:51:27.349238 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 12:51:27.349249 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 12:51:27.349265 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 12:51:27.349274 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 12:51:27.349283 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 12:51:27.349319 kernel: efi: EFI v2.7 by EDK II
Mar 2 12:51:27.349331 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 2 12:51:27.349361 kernel: random: crng init done
Mar 2 12:51:27.349371 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 2 12:51:27.349402 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 2 12:51:27.349412 kernel: secureboot: Secure boot disabled
Mar 2 12:51:27.349421 kernel: SMBIOS 2.8 present.
Mar 2 12:51:27.349430 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 2 12:51:27.349444 kernel: DMI: Memory slots populated: 1/1
Mar 2 12:51:27.349453 kernel: Hypervisor detected: KVM
Mar 2 12:51:27.349463 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 12:51:27.350273 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 12:51:27.350290 kernel: kvm-clock: using sched offset of 24876453464 cycles
Mar 2 12:51:27.350301 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 12:51:27.350311 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 12:51:27.350320 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 12:51:27.350331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 12:51:27.350342 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 12:51:27.350353 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 12:51:27.350369 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 12:51:27.350380 kernel: Using GB pages for direct mapping
Mar 2 12:51:27.350392 kernel: ACPI: Early table checksum verification disabled
Mar 2 12:51:27.350404 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 12:51:27.350413 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 12:51:27.350423 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350432 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350442 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 12:51:27.350456 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350466 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350478 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350489 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:51:27.350499 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 12:51:27.350508 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 12:51:27.350517 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 12:51:27.350527 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 12:51:27.350536 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 12:51:27.350551 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 12:51:27.350563 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 12:51:27.350574 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 12:51:27.350583 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 12:51:27.350592 kernel: No NUMA configuration found
Mar 2 12:51:27.350602 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 2 12:51:27.350611 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 2 12:51:27.350621 kernel: Zone ranges:
Mar 2 12:51:27.350631 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 12:51:27.350646 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 2 12:51:27.350657 kernel: Normal empty
Mar 2 12:51:27.350667 kernel: Device empty
Mar 2 12:51:27.350678 kernel: Movable zone start for each node
Mar 2 12:51:27.350689 kernel: Early memory node ranges
Mar 2 12:51:27.350699 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 12:51:27.350736 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 12:51:27.350748 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 12:51:27.350758 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 2 12:51:27.350769 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 2 12:51:27.350783 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 2 12:51:27.350794 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 2 12:51:27.350804 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 2 12:51:27.350815 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 2 12:51:27.350849 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:51:27.350929 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 12:51:27.350944 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 12:51:27.350955 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:51:27.350966 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 2 12:51:27.350977 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 2 12:51:27.350988 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 2 12:51:27.351000 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 2 12:51:27.351014 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 2 12:51:27.351026 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 12:51:27.351037 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 12:51:27.351084 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 12:51:27.351098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 12:51:27.351108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 12:51:27.351118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 12:51:27.351128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 12:51:27.351140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 12:51:27.351152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 12:51:27.351161 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 12:51:27.351171 kernel: TSC deadline timer available
Mar 2 12:51:27.351181 kernel: CPU topo: Max. logical packages: 1
Mar 2 12:51:27.351194 kernel: CPU topo: Max. logical dies: 1
Mar 2 12:51:27.351204 kernel: CPU topo: Max. dies per package: 1
Mar 2 12:51:27.351216 kernel: CPU topo: Max. threads per core: 1
Mar 2 12:51:27.351227 kernel: CPU topo: Num. cores per package: 4
Mar 2 12:51:27.351238 kernel: CPU topo: Num. threads per package: 4
Mar 2 12:51:27.351249 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 2 12:51:27.351261 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 12:51:27.351273 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 12:51:27.351283 kernel: kvm-guest: setup PV sched yield
Mar 2 12:51:27.351293 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 2 12:51:27.351307 kernel: Booting paravirtualized kernel on KVM
Mar 2 12:51:27.351317 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 12:51:27.351327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 12:51:27.351338 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 2 12:51:27.351350 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 2 12:51:27.351361 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 12:51:27.351372 kernel: kvm-guest: PV spinlocks enabled
Mar 2 12:51:27.351383 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 12:51:27.351422 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 12:51:27.351439 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 12:51:27.351451 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 12:51:27.351461 kernel: Fallback order for Node 0: 0
Mar 2 12:51:27.351470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 2 12:51:27.351480 kernel: Policy zone: DMA32
Mar 2 12:51:27.351490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 12:51:27.351499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 12:51:27.351509 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 12:51:27.351527 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 12:51:27.351538 kernel: Dynamic Preempt: voluntary
Mar 2 12:51:27.351548 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 12:51:27.351563 kernel: rcu: RCU event tracing is enabled.
Mar 2 12:51:27.351574 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 12:51:27.351583 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 12:51:27.351594 kernel: Rude variant of Tasks RCU enabled.
Mar 2 12:51:27.351606 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 12:51:27.351617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 12:51:27.351632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 12:51:27.351671 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:51:27.351684 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:51:27.351696 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:51:27.351709 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 12:51:27.351718 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 12:51:27.351728 kernel: Console: colour dummy device 80x25
Mar 2 12:51:27.351738 kernel: printk: legacy console [ttyS0] enabled
Mar 2 12:51:27.351748 kernel: ACPI: Core revision 20240827
Mar 2 12:51:27.351762 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 12:51:27.351773 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 12:51:27.351785 kernel: x2apic enabled
Mar 2 12:51:27.351796 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 12:51:27.351807 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 12:51:27.351819 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 12:51:27.351830 kernel: kvm-guest: setup PV IPIs
Mar 2 12:51:27.351842 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 12:51:27.351853 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 12:51:27.351927 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 12:51:27.351939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 12:51:27.351948 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 12:51:27.351958 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 12:51:27.351968 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 12:51:27.351978 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 12:51:27.351988 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 12:51:27.351998 kernel: Speculative Store Bypass: Vulnerable
Mar 2 12:51:27.352011 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 12:51:27.352028 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 12:51:27.352088 kernel: active return thunk: srso_alias_return_thunk
Mar 2 12:51:27.352103 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 12:51:27.352117 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 12:51:27.352127 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 12:51:27.352137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 12:51:27.352147 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 12:51:27.352156 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 12:51:27.352171 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 12:51:27.352182 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 12:51:27.352194 kernel: Freeing SMP alternatives memory: 32K
Mar 2 12:51:27.352205 kernel: pid_max: default: 32768 minimum: 301
Mar 2 12:51:27.352216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 12:51:27.352227 kernel: landlock: Up and running.
Mar 2 12:51:27.352239 kernel: SELinux: Initializing.
Mar 2 12:51:27.352250 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:51:27.352262 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:51:27.352277 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 12:51:27.352290 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 12:51:27.352301 kernel: signal: max sigframe size: 1776
Mar 2 12:51:27.352312 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 12:51:27.352322 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 12:51:27.352332 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 2 12:51:27.352341 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 12:51:27.352351 kernel: smp: Bringing up secondary CPUs ...
Mar 2 12:51:27.352362 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 12:51:27.352378 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 12:51:27.352390 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 12:51:27.352401 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 12:51:27.352413 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145388K reserved, 0K cma-reserved)
Mar 2 12:51:27.352424 kernel: devtmpfs: initialized
Mar 2 12:51:27.352436 kernel: x86/mm: Memory block size: 128MB
Mar 2 12:51:27.352449 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 12:51:27.352459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 12:51:27.352469 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 2 12:51:27.352484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 12:51:27.352493 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 2 12:51:27.352503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 12:51:27.352516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 12:51:27.352529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 12:51:27.352539 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 12:51:27.352549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 12:51:27.352559 kernel: audit: initializing netlink subsys (disabled)
Mar 2 12:51:27.352568 kernel: audit: type=2000 audit(1772455877.981:1): state=initialized audit_enabled=0 res=1
Mar 2 12:51:27.352583 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 12:51:27.352593 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 12:51:27.352604 kernel: cpuidle: using governor menu
Mar 2 12:51:27.352616 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 12:51:27.352627 kernel: dca service started, version 1.12.1
Mar 2 12:51:27.352638 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 2 12:51:27.352650 kernel: PCI: Using configuration type 1 for base access
Mar 2 12:51:27.352661 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 12:51:27.352673 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 12:51:27.352688 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 12:51:27.352701 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 12:51:27.352712 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 12:51:27.352721 kernel: ACPI: Added _OSI(Module Device)
Mar 2 12:51:27.352731 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 12:51:27.352741 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 12:51:27.352751 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 12:51:27.352761 kernel: ACPI: Interpreter enabled
Mar 2 12:51:27.352771 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 12:51:27.352787 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 12:51:27.352799 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 12:51:27.352810 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 12:51:27.352821 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 12:51:27.352832 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 12:51:27.353368 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 12:51:27.353608 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 12:51:27.353851 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 12:51:27.353933 kernel: PCI host bridge to bus 0000:00
Mar 2 12:51:27.354244 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 12:51:27.354427 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 12:51:27.354611 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 12:51:27.354793 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 2 12:51:27.355105 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 2 12:51:27.355296 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 2 12:51:27.355516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 12:51:27.356031 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 12:51:27.356368 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 2 12:51:27.356564 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 2 12:51:27.356757 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 2 12:51:27.357031 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 2 12:51:27.357281 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 12:51:27.357547 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 2 12:51:27.357745 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 2 12:51:27.358174 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 2 12:51:27.358462 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 2 12:51:27.358740 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 12:51:27.359004 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 2 12:51:27.359250 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 2 12:51:27.359447 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 2 12:51:27.359773 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 12:51:27.360114 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 2 12:51:27.360316 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 2 12:51:27.360562 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 2 12:51:27.360768 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 2 12:51:27.361193 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 12:51:27.361395 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 12:51:27.361666 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 12:51:27.361964 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 2 12:51:27.362251 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 2 12:51:27.362529 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 12:51:27.362770 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 2 12:51:27.362792 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 12:51:27.362804 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 12:51:27.362814 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 12:51:27.362824 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 12:51:27.362834 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 12:51:27.362844 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 12:51:27.362853 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 12:51:27.362980 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 12:51:27.362993 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 12:51:27.363006 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 12:51:27.363016 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 12:51:27.363026 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 12:51:27.363036 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 12:51:27.363102 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 12:51:27.363114 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 12:51:27.363123 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 12:51:27.363138 kernel: iommu: Default domain type: Translated
Mar 2 12:51:27.363148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 12:51:27.363158 kernel: efivars: Registered efivars operations
Mar 2 12:51:27.363168 kernel: PCI: Using ACPI for IRQ routing
Mar 2 12:51:27.363179 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 12:51:27.363191 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 12:51:27.363203 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 2 12:51:27.363214 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 2 12:51:27.363224 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 2 12:51:27.363238 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 2 12:51:27.363248 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 2 12:51:27.363258 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 2 12:51:27.363268 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 2 12:51:27.363468 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 12:51:27.363662 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 12:51:27.364146 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 12:51:27.364164 kernel: vgaarb: loaded
Mar 2 12:51:27.364181 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 12:51:27.364193 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 12:51:27.364206 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 12:51:27.364218 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 12:51:27.364231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 12:51:27.364241 kernel: pnp: PnP ACPI init
Mar 2 12:51:27.364815 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 2 12:51:27.364835 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 12:51:27.364851 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 12:51:27.364951 kernel: NET: Registered PF_INET protocol family
Mar 2 12:51:27.364963 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 12:51:27.364973 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 12:51:27.364983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 12:51:27.364995 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 12:51:27.365087 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 12:51:27.365104 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 12:51:27.365119 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:51:27.365134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:51:27.365145 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 12:51:27.365155 kernel: NET: Registered PF_XDP protocol family
Mar 2 12:51:27.365362 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 2 12:51:27.365565 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 2 12:51:27.367544 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 12:51:27.367693 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 12:51:27.367827 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 12:51:27.368162 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 2 12:51:27.368303 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 2 12:51:27.368433 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 2 12:51:27.368444 kernel: PCI: CLS 0 bytes, default 64
Mar 2 12:51:27.368453 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 12:51:27.368461 kernel: Initialise system trusted keyrings
Mar 2 12:51:27.368468 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 12:51:27.368476 kernel: Key type asymmetric registered
Mar 2 12:51:27.368489 kernel: Asymmetric key parser 'x509' registered
Mar 2 12:51:27.368497 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 12:51:27.368504 kernel: io scheduler mq-deadline registered
Mar 2 12:51:27.368512 kernel: io scheduler kyber registered
Mar 2 12:51:27.368519 kernel: io scheduler bfq registered
Mar 2 12:51:27.368527 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 12:51:27.368536 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 12:51:27.368544 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 12:51:27.368552 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 12:51:27.368562 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 12:51:27.368610 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 12:51:27.368619 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 12:51:27.368627 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 12:51:27.368634 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 12:51:27.368962 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 12:51:27.368983 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 12:51:27.369217 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 12:51:27.369363 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:51:26 UTC
(1772455886) Mar 2 12:51:27.369499 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Mar 2 12:51:27.369509 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 2 12:51:27.369517 kernel: efifb: probing for efifb Mar 2 12:51:27.369525 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Mar 2 12:51:27.369532 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 2 12:51:27.369545 kernel: efifb: scrolling: redraw Mar 2 12:51:27.369552 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 2 12:51:27.369560 kernel: Console: switching to colour frame buffer device 160x50 Mar 2 12:51:27.369567 kernel: fb0: EFI VGA frame buffer device Mar 2 12:51:27.369575 kernel: pstore: Using crash dump compression: deflate Mar 2 12:51:27.369582 kernel: pstore: Registered efi_pstore as persistent store backend Mar 2 12:51:27.369589 kernel: NET: Registered PF_INET6 protocol family Mar 2 12:51:27.369597 kernel: Segment Routing with IPv6 Mar 2 12:51:27.369604 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 12:51:27.369614 kernel: NET: Registered PF_PACKET protocol family Mar 2 12:51:27.369622 kernel: Key type dns_resolver registered Mar 2 12:51:27.369629 kernel: IPI shorthand broadcast: enabled Mar 2 12:51:27.369637 kernel: sched_clock: Marking stable (7792030826, 2701134571)->(11431217894, -938052497) Mar 2 12:51:27.369644 kernel: registered taskstats version 1 Mar 2 12:51:27.369652 kernel: Loading compiled-in X.509 certificates Mar 2 12:51:27.369659 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: ca052fea375a75b056ebd4154b64794dffb70b96' Mar 2 12:51:27.369667 kernel: Demotion targets for Node 0: null Mar 2 12:51:27.369674 kernel: Key type .fscrypt registered Mar 2 12:51:27.369691 kernel: Key type fscrypt-provisioning registered Mar 2 12:51:27.369705 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 12:51:27.369717 kernel: ima: Allocated hash algorithm: sha1 Mar 2 12:51:27.369727 kernel: ima: No architecture policies found Mar 2 12:51:27.369737 kernel: clk: Disabling unused clocks Mar 2 12:51:27.369747 kernel: Warning: unable to open an initial console. Mar 2 12:51:27.369758 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 2 12:51:27.369773 kernel: Write protecting the kernel read-only data: 40960k Mar 2 12:51:27.369785 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 2 12:51:27.369801 kernel: Run /init as init process Mar 2 12:51:27.369814 kernel: with arguments: Mar 2 12:51:27.369826 kernel: /init Mar 2 12:51:27.369836 kernel: with environment: Mar 2 12:51:27.369846 kernel: HOME=/ Mar 2 12:51:27.369979 kernel: TERM=linux Mar 2 12:51:27.369997 systemd[1]: Successfully made /usr/ read-only. Mar 2 12:51:27.370016 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 12:51:27.370034 systemd[1]: Detected virtualization kvm. Mar 2 12:51:27.370120 systemd[1]: Detected architecture x86-64. Mar 2 12:51:27.370132 systemd[1]: Running in initrd. Mar 2 12:51:27.370142 systemd[1]: No hostname configured, using default hostname. Mar 2 12:51:27.370153 systemd[1]: Hostname set to . Mar 2 12:51:27.370163 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:51:27.370174 systemd[1]: Queued start job for default target initrd.target. Mar 2 12:51:27.370186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:51:27.370203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 2 12:51:27.370217 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 12:51:27.370231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 12:51:27.370243 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 12:51:27.370255 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 12:51:27.370267 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 12:51:27.370287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 12:51:27.370298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:51:27.370310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:51:27.370323 systemd[1]: Reached target paths.target - Path Units. Mar 2 12:51:27.370336 systemd[1]: Reached target slices.target - Slice Units. Mar 2 12:51:27.370345 systemd[1]: Reached target swap.target - Swaps. Mar 2 12:51:27.370353 systemd[1]: Reached target timers.target - Timer Units. Mar 2 12:51:27.370361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 12:51:27.370369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 12:51:27.370380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 12:51:27.370388 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 2 12:51:27.370396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:51:27.370404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 12:51:27.370412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 2 12:51:27.370420 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 12:51:27.370427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 12:51:27.370435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:51:27.370446 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 12:51:27.370454 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 2 12:51:27.370462 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 12:51:27.370470 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:51:27.370478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:51:27.370486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:51:27.370494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 12:51:27.370505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:51:27.370513 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 12:51:27.370558 systemd-journald[200]: Collecting audit messages is disabled. Mar 2 12:51:27.370582 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 12:51:27.370592 systemd-journald[200]: Journal started Mar 2 12:51:27.370609 systemd-journald[200]: Runtime Journal (/run/log/journal/c5795d994e634dc192f2c9eb00cd9d27) is 6M, max 48.1M, 42.1M free. Mar 2 12:51:27.340407 systemd-modules-load[203]: Inserted module 'overlay' Mar 2 12:51:27.381680 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 12:51:27.379937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 2 12:51:27.385818 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 12:51:27.401501 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 12:51:27.413696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 12:51:27.420090 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 12:51:27.430957 kernel: Bridge firewalling registered Mar 2 12:51:27.431029 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 2 12:51:27.436652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 12:51:27.438308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 12:51:27.442113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 12:51:27.476525 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 2 12:51:27.478101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:51:27.485348 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:51:27.495081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:51:27.497395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:51:27.524349 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 12:51:27.532085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 2 12:51:27.572525 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 12:51:27.600617 systemd-resolved[244]: Positive Trust Anchors: Mar 2 12:51:27.600657 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:51:27.600702 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:51:27.604597 systemd-resolved[244]: Defaulting to hostname 'linux'. Mar 2 12:51:27.607265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:51:27.608774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:51:27.763993 kernel: SCSI subsystem initialized Mar 2 12:51:27.779086 kernel: Loading iSCSI transport class v2.0-870. Mar 2 12:51:27.846174 kernel: iscsi: registered transport (tcp) Mar 2 12:51:27.910353 kernel: iscsi: registered transport (qla4xxx) Mar 2 12:51:27.910689 kernel: QLogic iSCSI HBA Driver Mar 2 12:51:27.952369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 2 12:51:27.982941 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:51:27.987102 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 12:51:28.093615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 12:51:28.096687 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 12:51:28.276351 kernel: raid6: avx2x4 gen() 17746 MB/s Mar 2 12:51:28.296253 kernel: raid6: avx2x2 gen() 22521 MB/s Mar 2 12:51:28.317185 kernel: raid6: avx2x1 gen() 11178 MB/s Mar 2 12:51:28.317457 kernel: raid6: using algorithm avx2x2 gen() 22521 MB/s Mar 2 12:51:28.341134 kernel: raid6: .... xor() 17605 MB/s, rmw enabled Mar 2 12:51:28.341546 kernel: raid6: using avx2x2 recovery algorithm Mar 2 12:51:28.397675 kernel: xor: automatically using best checksumming function avx Mar 2 12:51:28.592391 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 12:51:28.616822 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:51:28.622809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:51:28.675083 systemd-udevd[455]: Using default interface naming scheme 'v255'. Mar 2 12:51:28.684693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:51:28.689735 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 12:51:28.736770 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Mar 2 12:51:28.789187 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:51:28.795092 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:51:28.944410 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:51:28.956979 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 2 12:51:29.007960 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 12:51:29.018018 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 12:51:29.026314 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 12:51:29.026342 kernel: GPT:9289727 != 19775487 Mar 2 12:51:29.026354 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 12:51:29.028552 kernel: GPT:9289727 != 19775487 Mar 2 12:51:29.033919 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 12:51:29.033945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:51:29.056964 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 2 12:51:29.069716 kernel: libata version 3.00 loaded. Mar 2 12:51:29.069831 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 12:51:29.078982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:51:29.079810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:51:29.105737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:51:29.122778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:51:29.132169 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 12:51:29.141980 kernel: AES CTR mode by8 optimization enabled Mar 2 12:51:29.183924 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 12:51:29.186949 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 12:51:29.189258 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 2 12:51:29.206998 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 12:51:29.207338 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 12:51:29.207607 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 12:51:29.214957 kernel: scsi host0: ahci Mar 2 12:51:29.215273 kernel: scsi host1: ahci Mar 2 12:51:29.218419 kernel: scsi host2: ahci Mar 2 12:51:29.220962 kernel: scsi host3: ahci Mar 2 12:51:29.221281 kernel: scsi host4: ahci Mar 2 12:51:29.224082 kernel: scsi host5: ahci Mar 2 12:51:29.227948 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 2 12:51:29.227983 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 2 12:51:29.232702 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 2 12:51:29.232743 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 2 12:51:29.232762 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 2 12:51:29.232779 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 2 12:51:29.253168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 12:51:29.271957 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 12:51:29.281492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 12:51:29.295297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:51:29.300006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 12:51:29.306953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 2 12:51:29.307028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:51:29.319191 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:51:29.336637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:51:29.345557 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 12:51:29.352156 disk-uuid[626]: Primary Header is updated. Mar 2 12:51:29.352156 disk-uuid[626]: Secondary Entries is updated. Mar 2 12:51:29.352156 disk-uuid[626]: Secondary Header is updated. Mar 2 12:51:29.366996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:51:29.378957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:51:29.418617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:51:29.549614 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 12:51:29.549697 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 12:51:29.549938 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 12:51:29.552982 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 12:51:29.557986 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 12:51:29.558116 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 12:51:29.559974 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 12:51:29.566845 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 12:51:29.567671 kernel: ata3.00: applying bridge limits Mar 2 12:51:29.573029 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 12:51:29.575725 kernel: ata3.00: configured for UDMA/100 Mar 2 12:51:29.579992 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 12:51:29.642603 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 12:51:29.643316 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 12:51:29.675939 kernel: sr 2:0:0:0: Attached scsi CD-ROM 
sr0 Mar 2 12:51:30.133036 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 12:51:30.135557 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:51:30.152797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:51:30.159241 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:51:30.162280 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 12:51:30.196599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:51:30.382986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:51:30.383755 disk-uuid[627]: The operation has completed successfully. Mar 2 12:51:30.435515 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 12:51:30.435716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 12:51:30.490681 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 12:51:30.528654 sh[661]: Success Mar 2 12:51:30.558844 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 12:51:30.558975 kernel: device-mapper: uevent: version 1.0.3 Mar 2 12:51:30.562946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 12:51:30.576958 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 2 12:51:30.622805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 12:51:30.632592 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 12:51:30.653688 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 2 12:51:30.676818 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (673) Mar 2 12:51:30.676936 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 12:51:30.676956 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:51:30.699792 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 12:51:30.699940 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 12:51:30.702497 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 12:51:30.706476 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 2 12:51:30.710989 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 12:51:30.712152 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 12:51:30.735109 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 12:51:30.785010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (696) Mar 2 12:51:30.791108 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:51:30.792612 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:51:30.807495 kernel: BTRFS info (device vda6): turning on async discard Mar 2 12:51:30.807593 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 12:51:30.818093 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:51:30.827271 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 12:51:30.834376 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 2 12:51:31.185105 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:51:31.196580 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 12:51:31.235789 ignition[740]: Ignition 2.22.0 Mar 2 12:51:31.237140 ignition[740]: Stage: fetch-offline Mar 2 12:51:31.237213 ignition[740]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:51:31.237231 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:51:31.237361 ignition[740]: parsed url from cmdline: "" Mar 2 12:51:31.237367 ignition[740]: no config URL provided Mar 2 12:51:31.237376 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 12:51:31.237393 ignition[740]: no config at "/usr/lib/ignition/user.ign" Mar 2 12:51:31.237431 ignition[740]: op(1): [started] loading QEMU firmware config module Mar 2 12:51:31.237440 ignition[740]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 12:51:31.289118 ignition[740]: op(1): [finished] loading QEMU firmware config module Mar 2 12:51:31.306575 systemd-networkd[849]: lo: Link UP Mar 2 12:51:31.306601 systemd-networkd[849]: lo: Gained carrier Mar 2 12:51:31.308804 systemd-networkd[849]: Enumeration completed Mar 2 12:51:31.308961 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:51:31.313395 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:51:31.313404 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:51:31.314343 systemd-networkd[849]: eth0: Link UP Mar 2 12:51:31.317628 systemd-networkd[849]: eth0: Gained carrier Mar 2 12:51:31.317643 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:51:31.318222 systemd[1]: Reached target network.target - Network. 
Mar 2 12:51:31.363194 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:51:31.548969 ignition[740]: parsing config with SHA512: dabf010020efbb409b2619480c6bd8bb16ba6231f004ad97be0028d3b6fbdffbd968ec23f189e92063c7987689cbf17cb9955ecbe14692fdfc5ddaf14529db13 Mar 2 12:51:31.564363 unknown[740]: fetched base config from "system" Mar 2 12:51:31.564400 unknown[740]: fetched user config from "qemu" Mar 2 12:51:31.565010 ignition[740]: fetch-offline: fetch-offline passed Mar 2 12:51:31.587541 systemd-resolved[244]: Detected conflict on linux IN A 10.0.0.17 Mar 2 12:51:31.565122 ignition[740]: Ignition finished successfully Mar 2 12:51:31.587556 systemd-resolved[244]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Mar 2 12:51:31.588808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:51:31.607728 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 12:51:31.609185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 12:51:31.691930 ignition[858]: Ignition 2.22.0 Mar 2 12:51:31.691968 ignition[858]: Stage: kargs Mar 2 12:51:31.692375 ignition[858]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:51:31.692396 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:51:31.694157 ignition[858]: kargs: kargs passed Mar 2 12:51:31.694224 ignition[858]: Ignition finished successfully Mar 2 12:51:31.705748 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 12:51:31.712255 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 12:51:31.914947 ignition[867]: Ignition 2.22.0
Mar 2 12:51:31.914974 ignition[867]: Stage: disks
Mar 2 12:51:31.915191 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:51:31.915208 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:51:31.927642 ignition[867]: disks: disks passed
Mar 2 12:51:31.927720 ignition[867]: Ignition finished successfully
Mar 2 12:51:31.934438 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 12:51:31.935843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 12:51:31.941767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 12:51:31.948847 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:51:31.962789 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:51:31.970037 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:51:31.975826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 12:51:32.024234 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 2 12:51:32.031420 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 12:51:32.043575 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 12:51:32.383968 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none.
Mar 2 12:51:32.385982 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 12:51:32.388933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:51:32.398422 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:51:32.407834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 12:51:32.411338 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 12:51:32.411406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 12:51:32.411445 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:51:32.442934 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 12:51:32.460744 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885)
Mar 2 12:51:32.460776 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 12:51:32.460792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:51:32.461191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 12:51:32.496716 kernel: BTRFS info (device vda6): turning on async discard
Mar 2 12:51:32.497035 kernel: BTRFS info (device vda6): enabling free space tree
Mar 2 12:51:32.500451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:51:32.646324 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 12:51:32.656266 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
Mar 2 12:51:32.664955 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 12:51:32.677586 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 12:51:32.860161 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 12:51:32.882298 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 12:51:32.884493 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 12:51:32.925590 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 12:51:32.932190 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 12:51:32.941809 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 12:51:33.089966 systemd-networkd[849]: eth0: Gained IPv6LL
Mar 2 12:51:33.127655 ignition[1000]: INFO : Ignition 2.22.0
Mar 2 12:51:33.127655 ignition[1000]: INFO : Stage: mount
Mar 2 12:51:33.133549 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:51:33.133549 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:51:33.133549 ignition[1000]: INFO : mount: mount passed
Mar 2 12:51:33.133549 ignition[1000]: INFO : Ignition finished successfully
Mar 2 12:51:33.151800 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 12:51:33.161140 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 12:51:33.406419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:51:33.487380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Mar 2 12:51:33.487725 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 12:51:33.493948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:51:33.502007 kernel: BTRFS info (device vda6): turning on async discard
Mar 2 12:51:33.502171 kernel: BTRFS info (device vda6): enabling free space tree
Mar 2 12:51:33.505330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:51:33.651666 ignition[1028]: INFO : Ignition 2.22.0
Mar 2 12:51:33.651666 ignition[1028]: INFO : Stage: files
Mar 2 12:51:33.660302 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:51:33.660302 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:51:33.681785 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 12:51:33.688412 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 12:51:33.688412 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 12:51:33.699234 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 12:51:33.699234 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 12:51:33.709544 unknown[1028]: wrote ssh authorized keys file for user: core
Mar 2 12:51:33.714336 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 12:51:33.723593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 12:51:33.723593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 12:51:33.824167 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 12:51:33.971841 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 12:51:33.971841 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 12:51:33.971841 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 12:51:34.183239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 12:51:35.070651 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 12:51:35.070651 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 12:51:35.097539 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 12:51:35.097539 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:51:35.126463 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:51:35.126463 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:51:35.142593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:51:35.142593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:51:35.191130 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:51:35.230110 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:51:35.237438 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:51:35.237438 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 12:51:35.255846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 12:51:35.255846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 12:51:35.283321 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 2 12:51:35.717733 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 12:51:37.298005 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 12:51:37.298005 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 2 12:51:37.309733 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:51:37.322433 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:51:37.322433 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 2 12:51:37.322433 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 2 12:51:37.334673 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:51:37.334673 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:51:37.334673 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 2 12:51:37.334673 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:51:37.413580 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:51:37.421415 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:51:37.428151 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:51:37.428151 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 12:51:37.438429 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 12:51:37.438429 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:51:37.438429 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:51:37.438429 ignition[1028]: INFO : files: files passed
Mar 2 12:51:37.438429 ignition[1028]: INFO : Ignition finished successfully
Mar 2 12:51:37.444623 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 12:51:37.460751 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 12:51:37.465999 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 12:51:37.516803 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 12:51:37.520459 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 12:51:37.521070 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 12:51:37.531356 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:51:37.531356 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:51:37.540533 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:51:37.562174 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:51:37.577248 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 12:51:37.590488 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 12:51:37.707523 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 12:51:37.707706 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 12:51:37.715441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 12:51:37.718843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 12:51:37.728149 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 12:51:37.729357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 12:51:37.790547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:51:37.807521 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 12:51:37.875182 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:51:37.884447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:51:37.887062 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 12:51:37.894964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 12:51:37.895246 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:51:37.905639 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 12:51:37.907668 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 12:51:37.914845 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 12:51:37.920006 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:51:37.925715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 12:51:37.931778 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 2 12:51:37.942575 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 12:51:37.948630 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:51:37.952336 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 12:51:37.958251 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 12:51:37.963392 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 12:51:37.965490 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 12:51:37.965705 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:51:37.981223 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:51:37.987760 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:51:37.994612 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 12:51:38.001338 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:51:38.002490 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 12:51:38.002629 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:51:38.014201 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 12:51:38.014402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:51:38.019846 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 12:51:38.025052 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 12:51:38.032118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:51:38.033621 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 12:51:38.043176 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 12:51:38.051422 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 12:51:38.051682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:51:38.053916 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 12:51:38.054024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:51:38.069601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 12:51:38.069736 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:51:38.074852 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 12:51:38.075166 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 12:51:38.098720 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 12:51:38.106635 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 12:51:38.108528 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 12:51:38.108735 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:51:38.112852 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 12:51:38.119566 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:51:38.151638 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 12:51:38.151834 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 12:51:38.204605 kernel: hrtimer: interrupt took 2422332 ns
Mar 2 12:51:38.217559 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 12:51:38.228231 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 12:51:38.228478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 12:51:38.238108 ignition[1083]: INFO : Ignition 2.22.0
Mar 2 12:51:38.238108 ignition[1083]: INFO : Stage: umount
Mar 2 12:51:38.238108 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:51:38.238108 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:51:38.238108 ignition[1083]: INFO : umount: umount passed
Mar 2 12:51:38.238108 ignition[1083]: INFO : Ignition finished successfully
Mar 2 12:51:38.254034 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 12:51:38.254262 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 12:51:38.256388 systemd[1]: Stopped target network.target - Network.
Mar 2 12:51:38.261797 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 12:51:38.261977 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 12:51:38.270642 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 12:51:38.270718 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 12:51:38.276993 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 12:51:38.277070 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 12:51:38.291394 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 12:51:38.291462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 12:51:38.298854 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 12:51:38.298984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 12:51:38.306555 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 12:51:38.313264 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 12:51:38.341571 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 12:51:38.341832 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 12:51:38.354416 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 2 12:51:38.354688 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 12:51:38.354837 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 12:51:38.366154 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 2 12:51:38.366824 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 2 12:51:38.371212 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 12:51:38.371282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:51:38.385209 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 12:51:38.386522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 12:51:38.386582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:51:38.394779 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 12:51:38.394834 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:51:38.407255 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 12:51:38.407310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:51:38.418971 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 12:51:38.419174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:51:38.427965 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:51:38.433198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 12:51:38.433273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 2 12:51:38.457773 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 12:51:38.459155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:51:38.464586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 12:51:38.464638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:51:38.466921 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 12:51:38.466965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:51:38.480647 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 12:51:38.480711 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:51:38.488362 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 12:51:38.488419 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:51:38.493051 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:51:38.493147 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:51:38.505345 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 12:51:38.510566 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 2 12:51:38.510650 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 12:51:38.519991 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 12:51:38.520063 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:51:38.530031 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 12:51:38.530117 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:51:38.537699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 12:51:38.537751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:51:38.543799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:51:38.543930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:51:38.552027 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 2 12:51:38.552194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 2 12:51:38.552271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 2 12:51:38.552370 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 2 12:51:38.553223 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 12:51:38.553399 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 12:51:38.571608 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 12:51:38.571792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 12:51:38.576462 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 12:51:38.583598 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 12:51:38.630422 systemd[1]: Switching root.
Mar 2 12:51:38.725696 systemd-journald[200]: Journal stopped
Mar 2 12:51:40.887062 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Mar 2 12:51:40.887207 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 12:51:40.887234 kernel: SELinux: policy capability open_perms=1
Mar 2 12:51:40.887251 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 12:51:40.887269 kernel: SELinux: policy capability always_check_network=0
Mar 2 12:51:40.887284 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 12:51:40.887299 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 12:51:40.887314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 12:51:40.887376 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 12:51:40.887399 kernel: SELinux: policy capability userspace_initial_context=0
Mar 2 12:51:40.887415 kernel: audit: type=1403 audit(1772455899.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 12:51:40.887436 systemd[1]: Successfully loaded SELinux policy in 91.778ms.
Mar 2 12:51:40.887466 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.692ms.
Mar 2 12:51:40.887485 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 2 12:51:40.887502 systemd[1]: Detected virtualization kvm.
Mar 2 12:51:40.887519 systemd[1]: Detected architecture x86-64.
Mar 2 12:51:40.887540 systemd[1]: Detected first boot.
Mar 2 12:51:40.887569 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:51:40.887585 zram_generator::config[1130]: No configuration found.
Mar 2 12:51:40.887602 kernel: Guest personality initialized and is inactive
Mar 2 12:51:40.887618 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 2 12:51:40.887636 kernel: Initialized host personality
Mar 2 12:51:40.887652 kernel: NET: Registered PF_VSOCK protocol family
Mar 2 12:51:40.887667 systemd[1]: Populated /etc with preset unit settings.
Mar 2 12:51:40.887685 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 2 12:51:40.887701 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 12:51:40.887726 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 12:51:40.887742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 12:51:40.887759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 12:51:40.887822 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 12:51:40.887840 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 12:51:40.887916 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 12:51:40.887938 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 12:51:40.887955 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 12:51:40.887977 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 12:51:40.887997 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 12:51:40.888013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:51:40.888030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:51:40.888046 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 12:51:40.888062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 12:51:40.888082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 12:51:40.888144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:51:40.888173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 12:51:40.888190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:51:40.888206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:51:40.888222 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 12:51:40.888238 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 12:51:40.888255 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:51:40.888273 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 12:51:40.888292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:51:40.888371 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:51:40.888399 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:51:40.888423 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:51:40.888442 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 12:51:40.888458 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 12:51:40.888474 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 2 12:51:40.888490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:51:40.888507 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:51:40.888526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:51:40.888543 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 12:51:40.888564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 12:51:40.888580 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 12:51:40.888597 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 12:51:40.888617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:51:40.888634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 12:51:40.888651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 12:51:40.888666 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 12:51:40.888683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 12:51:40.888702 systemd[1]: Reached target machines.target - Containers. Mar 2 12:51:40.888726 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 12:51:40.888746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:51:40.888816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:51:40.888839 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 12:51:40.889161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:51:40.889188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:51:40.889206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:51:40.889225 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 2 12:51:40.889249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:51:40.889269 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 12:51:40.889287 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 12:51:40.889304 kernel: loop: module loaded Mar 2 12:51:40.889323 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 12:51:40.889341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 12:51:40.889359 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 12:51:40.889377 kernel: fuse: init (API version 7.41) Mar 2 12:51:40.889403 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 12:51:40.889431 kernel: ACPI: bus type drm_connector registered Mar 2 12:51:40.889448 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:51:40.889465 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:51:40.889481 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 12:51:40.889502 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 12:51:40.889564 systemd-journald[1215]: Collecting audit messages is disabled. Mar 2 12:51:40.889652 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 2 12:51:40.889672 systemd-journald[1215]: Journal started Mar 2 12:51:40.889701 systemd-journald[1215]: Runtime Journal (/run/log/journal/c5795d994e634dc192f2c9eb00cd9d27) is 6M, max 48.1M, 42.1M free. Mar 2 12:51:40.246146 systemd[1]: Queued start job for default target multi-user.target. 
Mar 2 12:51:40.263069 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 12:51:40.263932 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 12:51:40.264747 systemd[1]: systemd-journald.service: Consumed 1.514s CPU time.
Mar 2 12:51:40.895987 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:51:40.903658 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 12:51:40.903708 systemd[1]: Stopped verity-setup.service.
Mar 2 12:51:40.911967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:51:40.917956 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:51:40.920708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 12:51:40.924157 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 12:51:40.927549 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 12:51:40.930432 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 12:51:40.933777 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 12:51:40.938069 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 12:51:40.944300 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 12:51:40.948768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:51:40.957166 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 12:51:40.957845 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 12:51:40.964798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:51:40.965398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:51:40.973633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:51:40.974508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:51:40.978278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:51:40.978637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:51:40.982646 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 12:51:40.983092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 12:51:40.989306 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:51:40.989767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:51:40.993849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:51:40.998290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 12:51:41.002941 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 12:51:41.007722 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 2 12:51:41.025378 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 12:51:41.031038 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 12:51:41.036234 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 12:51:41.037788 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 12:51:41.037988 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:51:41.044531 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 2 12:51:41.055295 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 12:51:41.058487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:51:41.060423 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 12:51:41.066997 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 12:51:41.075312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:51:41.079058 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 12:51:41.083087 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:51:41.086223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:51:41.093550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 12:51:41.110992 systemd-journald[1215]: Time spent on flushing to /var/log/journal/c5795d994e634dc192f2c9eb00cd9d27 is 55.945ms for 1074 entries.
Mar 2 12:51:41.110992 systemd-journald[1215]: System Journal (/var/log/journal/c5795d994e634dc192f2c9eb00cd9d27) is 8M, max 195.6M, 187.6M free.
Mar 2 12:51:41.335807 systemd-journald[1215]: Received client request to flush runtime journal.
Mar 2 12:51:41.335853 kernel: loop0: detected capacity change from 0 to 110984
Mar 2 12:51:41.109538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:51:41.119022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:51:41.124486 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 12:51:41.128269 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 12:51:41.135469 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 12:51:41.332935 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 12:51:41.341032 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 2 12:51:41.345926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 12:51:41.366010 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 12:51:41.373066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:51:41.377298 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 2 12:51:41.377316 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 2 12:51:41.385386 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:51:41.394476 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 12:51:41.402211 kernel: loop1: detected capacity change from 0 to 219192
Mar 2 12:51:41.418584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 12:51:41.419671 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 2 12:51:41.457645 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 12:51:41.457965 kernel: loop2: detected capacity change from 0 to 128560
Mar 2 12:51:41.464192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:51:41.509176 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Mar 2 12:51:41.509213 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Mar 2 12:51:41.514928 kernel: loop3: detected capacity change from 0 to 110984
Mar 2 12:51:41.515461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:51:41.542905 kernel: loop4: detected capacity change from 0 to 219192
Mar 2 12:51:41.572908 kernel: loop5: detected capacity change from 0 to 128560
Mar 2 12:51:41.680590 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 12:51:41.681530 (sd-merge)[1275]: Merged extensions into '/usr'.
Mar 2 12:51:41.690034 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 12:51:41.690056 systemd[1]: Reloading...
Mar 2 12:51:41.823094 zram_generator::config[1299]: No configuration found.
Mar 2 12:51:42.159734 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 12:51:42.201817 systemd[1]: Reloading finished in 510 ms.
Mar 2 12:51:42.243234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 12:51:42.248401 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 12:51:42.314479 systemd[1]: Starting ensure-sysext.service...
Mar 2 12:51:42.320598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:51:42.359059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 12:51:42.367576 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 2 12:51:42.367620 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 2 12:51:42.368511 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 12:51:42.368850 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 12:51:42.370082 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 12:51:42.370220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:51:42.370386 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Mar 2 12:51:42.370458 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Mar 2 12:51:42.374238 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)...
Mar 2 12:51:42.374290 systemd[1]: Reloading...
Mar 2 12:51:42.376788 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:51:42.377010 systemd-tmpfiles[1341]: Skipping /boot
Mar 2 12:51:42.392451 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:51:42.392528 systemd-tmpfiles[1341]: Skipping /boot
Mar 2 12:51:42.445976 zram_generator::config[1366]: No configuration found.
Mar 2 12:51:42.456429 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Mar 2 12:51:42.785970 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 12:51:42.835172 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 2 12:51:42.840959 kernel: ACPI: button: Power Button [PWRF]
Mar 2 12:51:42.868914 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 2 12:51:42.871902 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 12:51:42.876047 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 12:51:42.946471 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 12:51:42.946732 systemd[1]: Reloading finished in 571 ms.
Mar 2 12:51:42.957290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:51:42.962828 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:51:43.031321 systemd[1]: Finished ensure-sysext.service.
Mar 2 12:51:43.145223 kernel: kvm_amd: TSC scaling supported
Mar 2 12:51:43.145738 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 12:51:43.145771 kernel: kvm_amd: Nested Paging enabled
Mar 2 12:51:43.147939 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 12:51:43.147984 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 12:51:43.197546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:51:43.205054 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:51:43.207820 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 12:51:43.216045 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 12:51:43.219608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:51:43.226939 kernel: EDAC MC: Ver: 3.0.0
Mar 2 12:51:43.228299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:51:43.234995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 12:51:43.244365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:51:43.249817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:51:43.254954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:51:43.261092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 12:51:43.266792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 12:51:43.285752 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 12:51:43.308079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:51:43.316919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:51:43.324190 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 12:51:43.331111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 12:51:43.336506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:51:43.339418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:51:43.341492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:51:43.346218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:51:43.351359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:51:43.352934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:51:43.358190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:51:43.358773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:51:43.364341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:51:43.364771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:51:43.372257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 12:51:43.437092 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 12:51:43.441941 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 12:51:43.448802 augenrules[1495]: No rules
Mar 2 12:51:43.449673 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 12:51:43.450230 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 12:51:43.462598 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 12:51:43.465949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:51:43.466154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:51:43.468675 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 12:51:43.477241 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 12:51:43.482179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 12:51:43.560345 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 12:51:43.599083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:51:43.606053 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 12:51:43.707469 systemd-resolved[1486]: Positive Trust Anchors:
Mar 2 12:51:43.707490 systemd-resolved[1486]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:51:43.707534 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:51:43.710742 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 12:51:43.716793 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 12:51:43.719689 systemd-resolved[1486]: Defaulting to hostname 'linux'.
Mar 2 12:51:43.722484 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:51:43.730636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:51:43.736604 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:51:43.740422 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 12:51:43.744350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 12:51:43.748450 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 2 12:51:43.752245 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 12:51:43.755547 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 12:51:43.759411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 12:51:43.762755 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 12:51:43.762808 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:51:43.765227 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:51:43.771684 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 12:51:43.776769 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 12:51:43.782030 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 2 12:51:43.786979 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 2 12:51:43.791411 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 2 12:51:43.832748 systemd-networkd[1479]: lo: Link UP
Mar 2 12:51:43.832796 systemd-networkd[1479]: lo: Gained carrier
Mar 2 12:51:43.836587 systemd-networkd[1479]: Enumeration completed
Mar 2 12:51:43.839153 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 12:51:43.839405 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:51:43.839412 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:51:43.840227 systemd-networkd[1479]: eth0: Link UP
Mar 2 12:51:43.840550 systemd-networkd[1479]: eth0: Gained carrier
Mar 2 12:51:43.840565 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:51:43.844629 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 2 12:51:43.857428 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:51:43.864280 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 12:51:43.873669 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:51:43.880601 systemd[1]: Reached target network.target - Network.
Mar 2 12:51:43.880607 systemd-timesyncd[1488]: Network configuration changed, trying to establish connection.
Mar 2 12:51:44.664015 systemd-resolved[1486]: Clock change detected. Flushing caches.
Mar 2 12:51:44.664107 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:51:44.673196 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:51:44.673206 systemd-timesyncd[1488]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 12:51:44.673390 systemd-timesyncd[1488]: Initial clock synchronization to Mon 2026-03-02 12:51:44.663635 UTC.
Mar 2 12:51:44.690926 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:51:44.691515 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:51:44.799802 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 12:51:44.804615 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 12:51:44.809262 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 12:51:44.816566 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 12:51:44.827385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 12:51:44.831627 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 12:51:44.833036 jq[1531]: false
Mar 2 12:51:44.834795 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 2 12:51:44.840726 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 12:51:44.847601 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 12:51:44.853795 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 12:51:44.854074 extend-filesystems[1532]: Found /dev/vda6
Mar 2 12:51:44.861597 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Mar 2 12:51:44.856810 oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Mar 2 12:51:44.862761 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 12:51:44.865634 extend-filesystems[1532]: Found /dev/vda9
Mar 2 12:51:44.870836 extend-filesystems[1532]: Checking size of /dev/vda9
Mar 2 12:51:44.875890 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 12:51:44.879230 oslogin_cache_refresh[1533]: Failure getting users, quitting
Mar 2 12:51:44.879591 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting
Mar 2 12:51:44.879591 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 12:51:44.879591 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache
Mar 2 12:51:44.879252 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 12:51:44.879313 oslogin_cache_refresh[1533]: Refreshing group entry cache
Mar 2 12:51:44.884973 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 2 12:51:44.896517 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting
Mar 2 12:51:44.896517 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 12:51:44.895769 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 12:51:44.893796 oslogin_cache_refresh[1533]: Failure getting groups, quitting
Mar 2 12:51:44.893819 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 12:51:44.900195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 12:51:44.902839 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 12:51:44.903753 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 12:51:44.911016 extend-filesystems[1532]: Resized partition /dev/vda9
Mar 2 12:51:44.917142 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 12:51:44.924005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 12:51:44.928381 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 12:51:44.930542 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 12:51:44.932801 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025)
Mar 2 12:51:44.931061 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 2 12:51:44.931414 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 2 12:51:44.934123 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 12:51:44.934554 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 12:51:44.959939 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 12:51:44.956245 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 12:51:44.957069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 12:51:44.962025 update_engine[1558]: I20260302 12:51:44.961894 1558 main.cc:92] Flatcar Update Engine starting
Mar 2 12:51:44.978339 jq[1559]: true
Mar 2 12:51:44.998518 tar[1562]: linux-amd64/LICENSE
Mar 2 12:51:44.998962 tar[1562]: linux-amd64/helm
Mar 2 12:51:44.999655 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 2 12:51:45.007044 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 12:51:45.007869 jq[1567]: true
Mar 2 12:51:45.053486 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 12:51:45.062935 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 2 12:51:45.084554 dbus-daemon[1529]: [system] SELinux support is enabled
Mar 2 12:51:45.086499 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 12:51:45.086499 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 12:51:45.086499 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 12:51:45.100583 extend-filesystems[1532]: Resized filesystem in /dev/vda9
Mar 2 12:51:45.096946 systemd-logind[1549]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 2 12:51:45.105885 update_engine[1558]: I20260302 12:51:45.089671 1558 update_check_scheduler.cc:74] Next update check in 9m18s
Mar 2 12:51:45.096971 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 12:51:45.097792 systemd-logind[1549]: New seat seat0.
Mar 2 12:51:45.105058 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 12:51:45.112630 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 12:51:45.117781 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 12:51:45.118199 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 12:51:45.132247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 12:51:45.132325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 12:51:45.138894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 12:51:45.138928 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 12:51:45.166995 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 12:51:45.176603 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 12:51:45.197923 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 12:51:45.221180 bash[1597]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 12:51:45.224327 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 12:51:45.230997 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 12:51:45.447893 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 12:51:45.888176 containerd[1568]: time="2026-03-02T12:51:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 2 12:51:45.890505 containerd[1568]: time="2026-03-02T12:51:45.890477718Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 2 12:51:45.925871 containerd[1568]: time="2026-03-02T12:51:45.925667125Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.556µs"
Mar 2 12:51:45.925871 containerd[1568]: time="2026-03-02T12:51:45.925796937Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 2 12:51:45.925871 containerd[1568]: time="2026-03-02T12:51:45.925827615Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 2 12:51:45.926375 containerd[1568]: time="2026-03-02T12:51:45.926259250Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 2 12:51:45.926375 containerd[1568]: time="2026-03-02T12:51:45.926317489Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 2 12:51:45.926375 containerd[1568]: time="2026-03-02T12:51:45.926358475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 12:51:45.926662 containerd[1568]: time="2026-03-02T12:51:45.926566904Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 12:51:45.926662 containerd[1568]: time="2026-03-02T12:51:45.926599977Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 12:51:45.927282 containerd[1568]: time="2026-03-02T12:51:45.927182795Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 12:51:45.927282 containerd[1568]: time="2026-03-02T12:51:45.927236985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 12:51:45.927282 containerd[1568]: time="2026-03-02T12:51:45.927258295Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 12:51:45.927282 containerd[1568]: time="2026-03-02T12:51:45.927271620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 2 12:51:45.927662 containerd[1568]: time="2026-03-02T12:51:45.927554649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 2 12:51:45.928193 containerd[1568]: time="2026-03-02T12:51:45.928084097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 12:51:45.928193 containerd[1568]: time="2026-03-02T12:51:45.928170388Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 12:51:45.928193 containerd[1568]: time="2026-03-02T12:51:45.928190566Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 2 12:51:45.928403 containerd[1568]: time="2026-03-02T12:51:45.928322853Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 2 12:51:45.929461 containerd[1568]: time="2026-03-02T12:51:45.929245746Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 2 12:51:45.929575 containerd[1568]: time="2026-03-02T12:51:45.929518936Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 12:51:45.940368 containerd[1568]: time="2026-03-02T12:51:45.940292352Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 2 12:51:45.940484 containerd[1568]: time="2026-03-02T12:51:45.940397919Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 2 12:51:45.940510 containerd[1568]: time="2026-03-02T12:51:45.940491624Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 2 12:51:45.940529 containerd[1568]: time="2026-03-02T12:51:45.940515238Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 2 12:51:45.940548 containerd[1568]: time="2026-03-02T12:51:45.940536298Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 2 12:51:45.940581 containerd[1568]: time="2026-03-02T12:51:45.940553309Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 2 12:51:45.940581 containerd[1568]: time="2026-03-02T12:51:45.940572726Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 2 12:51:45.940616 containerd[1568]: time="2026-03-02T12:51:45.940589296Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 2 12:51:45.940616 containerd[1568]: time="2026-03-02T12:51:45.940605336Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 2 12:51:45.940656 containerd[1568]: time="2026-03-02T12:51:45.940619122Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 2 12:51:45.940656 containerd[1568]: time="2026-03-02T12:51:45.940630794Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 2 12:51:45.940656 containerd[1568]: time="2026-03-02T12:51:45.940648928Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.940978232Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941016203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941042282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941057711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941127551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941149092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941165862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941179669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 2 12:51:45.941467 containerd[1568]:
time="2026-03-02T12:51:45.941194035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941208653Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941223851Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941332955Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941358042Z" level=info msg="Start snapshots syncer" Mar 2 12:51:45.941467 containerd[1568]: time="2026-03-02T12:51:45.941392496Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 2 12:51:45.941785 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 12:51:45.942028 containerd[1568]: time="2026-03-02T12:51:45.941908720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 2 12:51:45.942390 containerd[1568]: time="2026-03-02T12:51:45.942108513Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 2 12:51:45.954336 containerd[1568]: time="2026-03-02T12:51:45.954095682Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 2 12:51:45.954588 containerd[1568]: time="2026-03-02T12:51:45.954519012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 2 12:51:45.954588 containerd[1568]: time="2026-03-02T12:51:45.954585115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 2 12:51:45.954670 containerd[1568]: time="2026-03-02T12:51:45.954604041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 2 12:51:45.954670 containerd[1568]: time="2026-03-02T12:51:45.954618749Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 2 12:51:45.954670 containerd[1568]: time="2026-03-02T12:51:45.954635730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 2 12:51:45.954670 containerd[1568]: time="2026-03-02T12:51:45.954654315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 2 12:51:45.954806 containerd[1568]: time="2026-03-02T12:51:45.954671697Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 2 12:51:45.954828 containerd[1568]: time="2026-03-02T12:51:45.954799225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 2 12:51:45.954828 containerd[1568]: time="2026-03-02T12:51:45.954820846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 2 12:51:45.954860 containerd[1568]: time="2026-03-02T12:51:45.954838369Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954884554Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954910803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954923337Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954935981Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954946991Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954960416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.954983319Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.955055263Z" level=info msg="runtime interface created" Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.955067467Z" level=info msg="created NRI interface" Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.955124803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.955148979Z" level=info msg="Connect containerd service" Mar 2 12:51:45.955382 containerd[1568]: time="2026-03-02T12:51:45.955178493Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 12:51:45.957789 containerd[1568]: 
time="2026-03-02T12:51:45.957260390Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 12:51:46.086973 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 12:51:46.100107 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 12:51:46.105787 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:60670.service - OpenSSH per-connection server daemon (10.0.0.1:60670). Mar 2 12:51:46.134961 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 12:51:46.135344 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 12:51:46.141355 tar[1562]: linux-amd64/README.md Mar 2 12:51:46.143348 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 12:51:46.184770 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:51:46.213034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 12:51:46.295413 systemd-networkd[1479]: eth0: Gained IPv6LL Mar 2 12:51:46.299168 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 12:51:46.391779 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 12:51:46.398082 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 12:51:46.559993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 12:51:46.565092 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 12:51:46.567537 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 60670 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:51:46.570173 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:51:46.572529 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Mar 2 12:51:46.578545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:51:46.593819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 12:51:46.626804 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:51:46.632324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:51:46.639329 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 12:51:46.658401 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 12:51:46.660532 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 12:51:46.667394 systemd-logind[1549]: New session 1 of user core. Mar 2 12:51:46.669211 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 12:51:46.671537 containerd[1568]: time="2026-03-02T12:51:46.670670992Z" level=info msg="Start subscribing containerd event" Mar 2 12:51:46.671537 containerd[1568]: time="2026-03-02T12:51:46.670957907Z" level=info msg="Start recovering state" Mar 2 12:51:46.671649 containerd[1568]: time="2026-03-02T12:51:46.671591730Z" level=info msg="Start event monitor" Mar 2 12:51:46.671875 containerd[1568]: time="2026-03-02T12:51:46.671811922Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:51:46.672042 containerd[1568]: time="2026-03-02T12:51:46.671984934Z" level=info msg="Start streaming server" Mar 2 12:51:46.672316 containerd[1568]: time="2026-03-02T12:51:46.672250581Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 2 12:51:46.672316 containerd[1568]: time="2026-03-02T12:51:46.672265618Z" level=info msg="runtime interface starting up..." Mar 2 12:51:46.673079 containerd[1568]: time="2026-03-02T12:51:46.672273603Z" level=info msg="starting plugins..." 
Mar 2 12:51:46.673079 containerd[1568]: time="2026-03-02T12:51:46.673041026Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 2 12:51:46.673181 containerd[1568]: time="2026-03-02T12:51:46.672677422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:51:46.674242 containerd[1568]: time="2026-03-02T12:51:46.673213228Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:51:46.675352 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 12:51:46.676168 containerd[1568]: time="2026-03-02T12:51:46.675513462Z" level=info msg="containerd successfully booted in 0.788141s" Mar 2 12:51:46.696126 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:51:46.704403 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:51:46.734376 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:51:46.738645 systemd-logind[1549]: New session c1 of user core. Mar 2 12:51:47.148666 systemd[1668]: Queued start job for default target default.target. Mar 2 12:51:47.165370 systemd[1668]: Created slice app.slice - User Application Slice. Mar 2 12:51:47.165505 systemd[1668]: Reached target paths.target - Paths. Mar 2 12:51:47.165594 systemd[1668]: Reached target timers.target - Timers. Mar 2 12:51:47.167964 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:51:47.275051 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:51:47.275235 systemd[1668]: Reached target sockets.target - Sockets. Mar 2 12:51:47.275291 systemd[1668]: Reached target basic.target - Basic System. Mar 2 12:51:47.275354 systemd[1668]: Reached target default.target - Main User Target. Mar 2 12:51:47.275404 systemd[1668]: Startup finished in 518ms. Mar 2 12:51:47.279490 systemd[1]: Started user@500.service - User Manager for UID 500. 
Mar 2 12:51:47.366478 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:51:47.444773 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:60678.service - OpenSSH per-connection server daemon (10.0.0.1:60678). Mar 2 12:51:47.533402 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 60678 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:51:47.535322 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:51:47.583105 systemd-logind[1549]: New session 2 of user core. Mar 2 12:51:50.934760 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3368844204 wd_nsec: 3368843979 Mar 2 12:51:50.963575 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:51:50.994323 sshd[1682]: Connection closed by 10.0.0.1 port 60678 Mar 2 12:51:50.995843 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Mar 2 12:51:51.007042 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:60678.service: Deactivated successfully. Mar 2 12:51:51.009759 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:51:51.011180 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:51:51.015211 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:55416.service - OpenSSH per-connection server daemon (10.0.0.1:55416). Mar 2 12:51:51.020828 systemd-logind[1549]: Removed session 2. Mar 2 12:51:51.097350 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 55416 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:51:51.099123 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:51:51.105340 systemd-logind[1549]: New session 3 of user core. Mar 2 12:51:51.116767 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 2 12:51:51.137387 sshd[1691]: Connection closed by 10.0.0.1 port 55416 Mar 2 12:51:51.137967 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Mar 2 12:51:51.143914 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:55416.service: Deactivated successfully. Mar 2 12:51:51.147140 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:51:51.148315 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:51:51.150867 systemd-logind[1549]: Removed session 3. Mar 2 12:51:53.805606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:51:53.806600 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:51:53.807545 systemd[1]: Startup finished in 7.984s (kernel) + 12.201s (initrd) + 14.046s (userspace) = 34.233s. Mar 2 12:51:53.835079 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:51:55.108386 kubelet[1705]: E0302 12:51:55.107841 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:51:55.113096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:51:55.113396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:51:55.114314 systemd[1]: kubelet.service: Consumed 7.426s CPU time, 256.9M memory peak. Mar 2 12:52:01.174073 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:42736.service - OpenSSH per-connection server daemon (10.0.0.1:42736). 
Mar 2 12:52:01.269670 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 42736 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:01.271939 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:01.281316 systemd-logind[1549]: New session 4 of user core. Mar 2 12:52:01.288729 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 12:52:01.315974 sshd[1717]: Connection closed by 10.0.0.1 port 42736 Mar 2 12:52:01.316537 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:01.328226 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:42736.service: Deactivated successfully. Mar 2 12:52:01.330495 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:52:01.331719 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:52:01.335263 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:42752.service - OpenSSH per-connection server daemon (10.0.0.1:42752). Mar 2 12:52:01.337048 systemd-logind[1549]: Removed session 4. Mar 2 12:52:01.412074 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 42752 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:01.414559 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:01.423882 systemd-logind[1549]: New session 5 of user core. Mar 2 12:52:01.433717 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 12:52:01.456005 sshd[1726]: Connection closed by 10.0.0.1 port 42752 Mar 2 12:52:01.456313 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:01.498828 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:42752.service: Deactivated successfully. Mar 2 12:52:01.501312 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:52:01.502852 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. 
Mar 2 12:52:01.506566 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:42768.service - OpenSSH per-connection server daemon (10.0.0.1:42768). Mar 2 12:52:01.508376 systemd-logind[1549]: Removed session 5. Mar 2 12:52:01.597819 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 42768 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:01.599690 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:01.607333 systemd-logind[1549]: New session 6 of user core. Mar 2 12:52:01.621724 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 12:52:01.646912 sshd[1736]: Connection closed by 10.0.0.1 port 42768 Mar 2 12:52:01.647035 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:01.668181 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:42768.service: Deactivated successfully. Mar 2 12:52:01.670386 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:52:01.671409 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:52:01.674520 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:42784.service - OpenSSH per-connection server daemon (10.0.0.1:42784). Mar 2 12:52:01.676045 systemd-logind[1549]: Removed session 6. Mar 2 12:52:01.742223 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 42784 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:01.743655 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:01.750550 systemd-logind[1549]: New session 7 of user core. Mar 2 12:52:01.768677 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 2 12:52:01.799081 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:52:01.799661 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:52:01.828298 sudo[1746]: pam_unix(sudo:session): session closed for user root Mar 2 12:52:01.830639 sshd[1745]: Connection closed by 10.0.0.1 port 42784 Mar 2 12:52:01.831322 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:01.847820 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:42784.service: Deactivated successfully. Mar 2 12:52:01.850400 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 12:52:01.851994 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Mar 2 12:52:01.861535 systemd[1]: Started sshd@7-10.0.0.17:22-10.0.0.1:42800.service - OpenSSH per-connection server daemon (10.0.0.1:42800). Mar 2 12:52:01.863232 systemd-logind[1549]: Removed session 7. Mar 2 12:52:01.923498 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 42800 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:01.925238 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:01.931880 systemd-logind[1549]: New session 8 of user core. Mar 2 12:52:01.941642 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 2 12:52:01.961826 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:52:01.962306 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:52:01.970335 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 2 12:52:01.995942 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 2 12:52:01.996514 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:52:02.011941 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 12:52:02.086825 augenrules[1779]: No rules Mar 2 12:52:02.088398 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:52:02.088919 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 12:52:02.090690 sudo[1756]: pam_unix(sudo:session): session closed for user root Mar 2 12:52:02.092898 sshd[1755]: Connection closed by 10.0.0.1 port 42800 Mar 2 12:52:02.093578 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:02.103695 systemd[1]: sshd@7-10.0.0.17:22-10.0.0.1:42800.service: Deactivated successfully. Mar 2 12:52:02.106162 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 12:52:02.107651 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Mar 2 12:52:02.110825 systemd[1]: Started sshd@8-10.0.0.17:22-10.0.0.1:42816.service - OpenSSH per-connection server daemon (10.0.0.1:42816). Mar 2 12:52:02.112576 systemd-logind[1549]: Removed session 8. Mar 2 12:52:02.233505 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 42816 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:52:02.235941 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:52:02.246028 systemd-logind[1549]: New session 9 of user core. 
Mar 2 12:52:02.270783 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 12:52:02.299238 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:52:02.299648 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:52:04.507982 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 12:52:04.536179 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:52:05.397387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:52:05.516742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:52:07.571816 dockerd[1813]: time="2026-03-02T12:52:07.571599760Z" level=info msg="Starting up" Mar 2 12:52:07.573552 dockerd[1813]: time="2026-03-02T12:52:07.573369092Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 2 12:52:07.606118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:52:07.622008 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:52:07.644168 dockerd[1813]: time="2026-03-02T12:52:07.643845157Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 2 12:52:07.734338 systemd[1]: var-lib-docker-metacopy\x2dcheck3804745807-merged.mount: Deactivated successfully. Mar 2 12:52:07.804991 dockerd[1813]: time="2026-03-02T12:52:07.804839082Z" level=info msg="Loading containers: start." 
Mar 2 12:52:07.821486 kernel: Initializing XFRM netlink socket Mar 2 12:52:08.510627 kubelet[1839]: E0302 12:52:08.510228 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:52:08.520143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:52:08.520549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:52:08.521371 systemd[1]: kubelet.service: Consumed 2.501s CPU time, 109.2M memory peak. Mar 2 12:52:09.114146 systemd-networkd[1479]: docker0: Link UP Mar 2 12:52:09.122394 dockerd[1813]: time="2026-03-02T12:52:09.122240572Z" level=info msg="Loading containers: done." Mar 2 12:52:09.182592 dockerd[1813]: time="2026-03-02T12:52:09.181709772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:52:09.182592 dockerd[1813]: time="2026-03-02T12:52:09.182379522Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 2 12:52:09.182592 dockerd[1813]: time="2026-03-02T12:52:09.182808423Z" level=info msg="Initializing buildkit" Mar 2 12:52:09.300876 dockerd[1813]: time="2026-03-02T12:52:09.300603018Z" level=info msg="Completed buildkit initialization" Mar 2 12:52:09.317157 dockerd[1813]: time="2026-03-02T12:52:09.316379627Z" level=info msg="Daemon has completed initialization" Mar 2 12:52:09.317157 dockerd[1813]: time="2026-03-02T12:52:09.316830940Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:52:09.318697 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 2 12:52:11.135156 containerd[1568]: time="2026-03-02T12:52:11.134514962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 2 12:52:11.831722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771874386.mount: Deactivated successfully. Mar 2 12:52:14.557355 containerd[1568]: time="2026-03-02T12:52:14.557044422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:14.557355 containerd[1568]: time="2026-03-02T12:52:14.557473311Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 2 12:52:14.561329 containerd[1568]: time="2026-03-02T12:52:14.561212925Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:14.565369 containerd[1568]: time="2026-03-02T12:52:14.565267392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:14.568825 containerd[1568]: time="2026-03-02T12:52:14.568701317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 3.433952534s" Mar 2 12:52:14.569139 containerd[1568]: time="2026-03-02T12:52:14.569016636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 2 12:52:14.573767 containerd[1568]: time="2026-03-02T12:52:14.573740671Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 2 12:52:16.902135 containerd[1568]: time="2026-03-02T12:52:16.902022851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:16.903204 containerd[1568]: time="2026-03-02T12:52:16.902875758Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 2 12:52:16.904183 containerd[1568]: time="2026-03-02T12:52:16.904092595Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:16.908150 containerd[1568]: time="2026-03-02T12:52:16.908088886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:16.909487 containerd[1568]: time="2026-03-02T12:52:16.909396820Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.335511789s" Mar 2 12:52:16.909543 containerd[1568]: time="2026-03-02T12:52:16.909511504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 2 12:52:16.911657 containerd[1568]: time="2026-03-02T12:52:16.911607417Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 2 12:52:18.666043 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 2. Mar 2 12:52:18.669353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:52:19.140599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:52:19.157974 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:52:19.563616 containerd[1568]: time="2026-03-02T12:52:19.558126920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:19.565622 containerd[1568]: time="2026-03-02T12:52:19.564217200Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 2 12:52:19.569500 containerd[1568]: time="2026-03-02T12:52:19.568833718Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:19.573233 containerd[1568]: time="2026-03-02T12:52:19.573201062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:19.575049 containerd[1568]: time="2026-03-02T12:52:19.574998913Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.663362904s" Mar 2 12:52:19.575104 containerd[1568]: time="2026-03-02T12:52:19.575054145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference 
\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 2 12:52:19.577505 containerd[1568]: time="2026-03-02T12:52:19.577477391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 2 12:52:19.605403 kubelet[2124]: E0302 12:52:19.605294 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:52:19.609557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:52:19.609826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:52:19.610623 systemd[1]: kubelet.service: Consumed 717ms CPU time, 110.9M memory peak. Mar 2 12:52:21.228689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060797099.mount: Deactivated successfully. 
Mar 2 12:52:21.919210 containerd[1568]: time="2026-03-02T12:52:21.919082220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:21.920773 containerd[1568]: time="2026-03-02T12:52:21.920296456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 2 12:52:21.921920 containerd[1568]: time="2026-03-02T12:52:21.921867842Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:21.926332 containerd[1568]: time="2026-03-02T12:52:21.926219676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:21.927845 containerd[1568]: time="2026-03-02T12:52:21.927754806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.350109484s" Mar 2 12:52:21.927845 containerd[1568]: time="2026-03-02T12:52:21.927794729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 2 12:52:21.939360 containerd[1568]: time="2026-03-02T12:52:21.939285092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 2 12:52:22.539645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134613700.mount: Deactivated successfully. 
Mar 2 12:52:24.421356 containerd[1568]: time="2026-03-02T12:52:24.420984200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.423758 containerd[1568]: time="2026-03-02T12:52:24.421951189Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 2 12:52:24.423758 containerd[1568]: time="2026-03-02T12:52:24.423237593Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.427186 containerd[1568]: time="2026-03-02T12:52:24.427130655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.431171 containerd[1568]: time="2026-03-02T12:52:24.429590374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.490231104s" Mar 2 12:52:24.431171 containerd[1568]: time="2026-03-02T12:52:24.429636249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 2 12:52:24.432536 containerd[1568]: time="2026-03-02T12:52:24.432410668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 12:52:24.865531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436635352.mount: Deactivated successfully. 
Mar 2 12:52:24.873781 containerd[1568]: time="2026-03-02T12:52:24.873608483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.874762 containerd[1568]: time="2026-03-02T12:52:24.874676692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 12:52:24.876545 containerd[1568]: time="2026-03-02T12:52:24.876519115Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.881390 containerd[1568]: time="2026-03-02T12:52:24.881303590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:24.882646 containerd[1568]: time="2026-03-02T12:52:24.882503621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 449.75716ms" Mar 2 12:52:24.882646 containerd[1568]: time="2026-03-02T12:52:24.882630926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 12:52:24.883474 containerd[1568]: time="2026-03-02T12:52:24.883342395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 2 12:52:25.876860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826417961.mount: Deactivated successfully. 
Mar 2 12:52:27.129705 containerd[1568]: time="2026-03-02T12:52:27.129399968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:27.131781 containerd[1568]: time="2026-03-02T12:52:27.130228452Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 2 12:52:27.131830 containerd[1568]: time="2026-03-02T12:52:27.131788777Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:27.135581 containerd[1568]: time="2026-03-02T12:52:27.135515800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:52:27.136380 containerd[1568]: time="2026-03-02T12:52:27.136312218Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.252939718s" Mar 2 12:52:27.136380 containerd[1568]: time="2026-03-02T12:52:27.136356951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 2 12:52:29.862549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 12:52:29.865731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:52:30.162853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:52:30.196133 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:52:30.353096 kubelet[2290]: E0302 12:52:30.352391 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:52:30.378900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:52:30.382848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:52:30.393308 systemd[1]: kubelet.service: Consumed 402ms CPU time, 110.2M memory peak. Mar 2 12:52:30.588165 update_engine[1558]: I20260302 12:52:30.585356 1558 update_attempter.cc:509] Updating boot flags... Mar 2 12:52:31.284242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:52:31.284680 systemd[1]: kubelet.service: Consumed 402ms CPU time, 110.2M memory peak. Mar 2 12:52:31.287846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:52:31.329308 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-9.scope)... Mar 2 12:52:31.329322 systemd[1]: Reloading... Mar 2 12:52:31.416512 zram_generator::config[2366]: No configuration found. Mar 2 12:52:31.653732 systemd[1]: Reloading finished in 323 ms. Mar 2 12:52:31.731070 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 12:52:31.731240 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 12:52:31.731802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:52:31.731870 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.1M memory peak. 
Mar 2 12:52:31.733757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:52:31.982035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:52:31.999041 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:52:32.062209 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 12:52:32.062209 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:52:32.062702 kubelet[2414]: I0302 12:52:32.062513 2414 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 12:52:32.765248 kubelet[2414]: I0302 12:52:32.764992 2414 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 12:52:32.765248 kubelet[2414]: I0302 12:52:32.765083 2414 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:52:32.765248 kubelet[2414]: I0302 12:52:32.765289 2414 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 12:52:32.765248 kubelet[2414]: I0302 12:52:32.765315 2414 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 12:52:32.766411 kubelet[2414]: I0302 12:52:32.765701 2414 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 12:52:32.774787 kubelet[2414]: E0302 12:52:32.774718 2414 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:52:32.775166 kubelet[2414]: I0302 12:52:32.775088 2414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:52:32.784245 kubelet[2414]: I0302 12:52:32.784164 2414 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 12:52:32.799308 kubelet[2414]: I0302 12:52:32.799218 2414 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 12:52:32.801260 kubelet[2414]: I0302 12:52:32.801143 2414 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:52:32.801594 kubelet[2414]: I0302 12:52:32.801200 2414 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 12:52:32.802012 kubelet[2414]: I0302 12:52:32.801679 2414 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 12:52:32.802012 
kubelet[2414]: I0302 12:52:32.801698 2414 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 12:52:32.802012 kubelet[2414]: I0302 12:52:32.801949 2414 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 12:52:32.804874 kubelet[2414]: I0302 12:52:32.804782 2414 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:52:32.805352 kubelet[2414]: I0302 12:52:32.805265 2414 kubelet.go:475] "Attempting to sync node with API server" Mar 2 12:52:32.805352 kubelet[2414]: I0302 12:52:32.805305 2414 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:52:32.805571 kubelet[2414]: I0302 12:52:32.805513 2414 kubelet.go:387] "Adding apiserver pod source" Mar 2 12:52:32.805679 kubelet[2414]: I0302 12:52:32.805621 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:52:32.807165 kubelet[2414]: E0302 12:52:32.807084 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 12:52:32.807376 kubelet[2414]: E0302 12:52:32.807306 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 12:52:32.809935 kubelet[2414]: I0302 12:52:32.809849 2414 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 12:52:32.811522 kubelet[2414]: I0302 12:52:32.810768 2414 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:52:32.811522 kubelet[2414]: I0302 12:52:32.810793 2414 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 12:52:32.811522 kubelet[2414]: W0302 12:52:32.811023 2414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 12:52:32.816575 kubelet[2414]: I0302 12:52:32.816543 2414 server.go:1262] "Started kubelet" Mar 2 12:52:32.820483 kubelet[2414]: I0302 12:52:32.818881 2414 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:52:32.820483 kubelet[2414]: I0302 12:52:32.818932 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 12:52:32.820483 kubelet[2414]: I0302 12:52:32.818998 2414 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 12:52:32.820483 kubelet[2414]: I0302 12:52:32.819306 2414 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:52:32.820483 kubelet[2414]: I0302 12:52:32.819571 2414 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:52:32.823102 kubelet[2414]: I0302 12:52:32.821729 2414 server.go:310] "Adding debug handlers to kubelet server" Mar 2 12:52:32.823102 kubelet[2414]: I0302 12:52:32.822845 2414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:52:32.823102 kubelet[2414]: E0302 12:52:32.821503 2414 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899074c57a6fadb default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:52:32.816470747 +0000 UTC m=+0.811388376,LastTimestamp:2026-03-02 12:52:32.816470747 +0000 UTC m=+0.811388376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:52:32.824084 kubelet[2414]: E0302 12:52:32.824053 2414 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:52:32.824160 kubelet[2414]: I0302 12:52:32.824151 2414 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 12:52:32.824563 kubelet[2414]: I0302 12:52:32.824494 2414 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 12:52:32.824676 kubelet[2414]: I0302 12:52:32.824604 2414 reconciler.go:29] "Reconciler: start to sync state" Mar 2 12:52:32.824963 kubelet[2414]: E0302 12:52:32.824936 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 12:52:32.826139 kubelet[2414]: E0302 12:52:32.825924 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms" Mar 2 12:52:32.826663 kubelet[2414]: E0302 12:52:32.826601 2414 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:52:32.827699 kubelet[2414]: I0302 12:52:32.827582 2414 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:52:32.827802 kubelet[2414]: I0302 12:52:32.827751 2414 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:52:32.829823 kubelet[2414]: I0302 12:52:32.829765 2414 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:52:32.858581 kubelet[2414]: I0302 12:52:32.858524 2414 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 12:52:32.861242 kubelet[2414]: I0302 12:52:32.861215 2414 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 12:52:32.861459 kubelet[2414]: I0302 12:52:32.861339 2414 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 12:52:32.861534 kubelet[2414]: I0302 12:52:32.861492 2414 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 12:52:32.862564 kubelet[2414]: E0302 12:52:32.861533 2414 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:52:32.862841 kubelet[2414]: E0302 12:52:32.862821 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 12:52:32.863250 kubelet[2414]: I0302 12:52:32.863234 2414 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 12:52:32.863334 kubelet[2414]: I0302 12:52:32.863322 2414 cpu_manager.go:222] "Reconciling" 
reconcilePeriod="10s" Mar 2 12:52:32.863413 kubelet[2414]: I0302 12:52:32.863402 2414 state_mem.go:36] "Initialized new in-memory state store" Mar 2 12:52:32.866804 kubelet[2414]: I0302 12:52:32.866782 2414 policy_none.go:49] "None policy: Start" Mar 2 12:52:32.867017 kubelet[2414]: I0302 12:52:32.866995 2414 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 12:52:32.867224 kubelet[2414]: I0302 12:52:32.867175 2414 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 12:52:32.868985 kubelet[2414]: I0302 12:52:32.868949 2414 policy_none.go:47] "Start" Mar 2 12:52:32.876092 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 12:52:32.902269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 12:52:32.924155 kubelet[2414]: E0302 12:52:32.924130 2414 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:52:32.926534 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 12:52:32.931138 kubelet[2414]: E0302 12:52:32.931042 2414 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:52:32.931581 kubelet[2414]: I0302 12:52:32.931543 2414 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 12:52:32.931736 kubelet[2414]: I0302 12:52:32.931591 2414 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:52:32.933679 kubelet[2414]: I0302 12:52:32.932490 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 12:52:32.935897 kubelet[2414]: E0302 12:52:32.935864 2414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime"
Mar 2 12:52:32.935994 kubelet[2414]: E0302 12:52:32.935982 2414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 12:52:32.978532 systemd[1]: Created slice kubepods-burstable-poda385ef26754459ee6e462b7f5bb6ab5f.slice - libcontainer container kubepods-burstable-poda385ef26754459ee6e462b7f5bb6ab5f.slice.
Mar 2 12:52:33.009800 kubelet[2414]: E0302 12:52:33.009611 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:33.011509 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 2 12:52:33.016287 kubelet[2414]: E0302 12:52:33.016090 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:33.018510 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 2 12:52:33.021857 kubelet[2414]: E0302 12:52:33.021804 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:33.026515 kubelet[2414]: I0302 12:52:33.026380 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 12:52:33.026623 kubelet[2414]: I0302 12:52:33.026522 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:33.026623 kubelet[2414]: I0302 12:52:33.026549 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:33.026623 kubelet[2414]: I0302 12:52:33.026570 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:33.026755 kubelet[2414]: I0302 12:52:33.026666 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:33.026755 kubelet[2414]: I0302 12:52:33.026693 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:33.026755 kubelet[2414]: I0302 12:52:33.026714 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:33.026755 kubelet[2414]: I0302 12:52:33.026732 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:33.026755 kubelet[2414]: I0302 12:52:33.026750 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:33.027210 kubelet[2414]: E0302 12:52:33.027060 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms"
Mar 2 12:52:33.034047 kubelet[2414]: I0302 12:52:33.033910 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:33.034586 kubelet[2414]: E0302 12:52:33.034409 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Mar 2 12:52:33.236865 kubelet[2414]: I0302 12:52:33.236743 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:33.237689 kubelet[2414]: E0302 12:52:33.237585 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Mar 2 12:52:33.314525 kubelet[2414]: E0302 12:52:33.314194 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:33.316028 containerd[1568]: time="2026-03-02T12:52:33.315850624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a385ef26754459ee6e462b7f5bb6ab5f,Namespace:kube-system,Attempt:0,}"
Mar 2 12:52:33.319768 kubelet[2414]: E0302 12:52:33.319578 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:33.320140 containerd[1568]: time="2026-03-02T12:52:33.320061029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 2 12:52:33.325131 kubelet[2414]: E0302 12:52:33.325073 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:33.325768 containerd[1568]: time="2026-03-02T12:52:33.325587732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 2 12:52:33.428265 kubelet[2414]: E0302 12:52:33.428076 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms"
Mar 2 12:52:33.639837 kubelet[2414]: I0302 12:52:33.639791 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:33.640504 kubelet[2414]: E0302 12:52:33.640300 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Mar 2 12:52:33.973619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868435199.mount: Deactivated successfully.
Mar 2 12:52:33.980668 containerd[1568]: time="2026-03-02T12:52:33.980571728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:52:33.987812 containerd[1568]: time="2026-03-02T12:52:33.987715817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 2 12:52:33.989109 containerd[1568]: time="2026-03-02T12:52:33.989043593Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:52:33.990568 containerd[1568]: time="2026-03-02T12:52:33.990416718Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:52:33.991951 containerd[1568]: time="2026-03-02T12:52:33.991903454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:52:33.992694 containerd[1568]: time="2026-03-02T12:52:33.992510054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 2 12:52:33.996479 containerd[1568]: time="2026-03-02T12:52:33.994697275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 2 12:52:33.997947 containerd[1568]: time="2026-03-02T12:52:33.997854311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 12:52:34.001487 containerd[1568]: time="2026-03-02T12:52:34.001338347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 678.668881ms"
Mar 2 12:52:34.002314 containerd[1568]: time="2026-03-02T12:52:34.002277178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 683.03027ms"
Mar 2 12:52:34.003534 containerd[1568]: time="2026-03-02T12:52:34.003342093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 675.410212ms"
Mar 2 12:52:34.056488 containerd[1568]: time="2026-03-02T12:52:34.056250938Z" level=info msg="connecting to shim 49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9" address="unix:///run/containerd/s/e567645b14c3dbc92dbe6873afdd39a175cfb93bddc530bb5b42a85f0d61d2ba" namespace=k8s.io protocol=ttrpc version=3
Mar 2 12:52:34.061741 containerd[1568]: time="2026-03-02T12:52:34.061602449Z" level=info msg="connecting to shim fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c" address="unix:///run/containerd/s/18f9d74f9407104bb638b12e1ed92542db609168ecbd25629add237d0822b15b" namespace=k8s.io protocol=ttrpc version=3
Mar 2 12:52:34.065657 containerd[1568]: time="2026-03-02T12:52:34.065499314Z" level=info msg="connecting to shim 32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387" address="unix:///run/containerd/s/b168b8a5c5ef50d116358d42578b40da5a6007cf55f5762cb27367af5455611f" namespace=k8s.io protocol=ttrpc version=3
Mar 2 12:52:34.092677 kubelet[2414]: E0302 12:52:34.092574 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 12:52:34.094718 systemd[1]: Started cri-containerd-49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9.scope - libcontainer container 49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9.
Mar 2 12:52:34.121745 systemd[1]: Started cri-containerd-fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c.scope - libcontainer container fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c.
Mar 2 12:52:34.168692 systemd[1]: Started cri-containerd-32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387.scope - libcontainer container 32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387.
Mar 2 12:52:34.231501 kubelet[2414]: E0302 12:52:34.229778 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s"
Mar 2 12:52:34.262334 containerd[1568]: time="2026-03-02T12:52:34.262295416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a385ef26754459ee6e462b7f5bb6ab5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387\""
Mar 2 12:52:34.264386 kubelet[2414]: E0302 12:52:34.264295 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:34.267761 containerd[1568]: time="2026-03-02T12:52:34.267728643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9\""
Mar 2 12:52:34.268999 kubelet[2414]: E0302 12:52:34.268899 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:34.272580 containerd[1568]: time="2026-03-02T12:52:34.272492822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c\""
Mar 2 12:52:34.272580 containerd[1568]: time="2026-03-02T12:52:34.272518499Z" level=info msg="CreateContainer within sandbox \"32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 12:52:34.273929 kubelet[2414]: E0302 12:52:34.273885 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:34.274716 containerd[1568]: time="2026-03-02T12:52:34.274578490Z" level=info msg="CreateContainer within sandbox \"49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 12:52:34.279135 containerd[1568]: time="2026-03-02T12:52:34.279037381Z" level=info msg="CreateContainer within sandbox \"fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 12:52:34.288071 containerd[1568]: time="2026-03-02T12:52:34.287977815Z" level=info msg="Container 8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:52:34.294615 containerd[1568]: time="2026-03-02T12:52:34.294571195Z" level=info msg="Container 1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:52:34.300179 containerd[1568]: time="2026-03-02T12:52:34.300139451Z" level=info msg="CreateContainer within sandbox \"32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6\""
Mar 2 12:52:34.300262 containerd[1568]: time="2026-03-02T12:52:34.300213949Z" level=info msg="Container fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:52:34.301050 containerd[1568]: time="2026-03-02T12:52:34.300936754Z" level=info msg="StartContainer for \"8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6\""
Mar 2 12:52:34.302342 containerd[1568]: time="2026-03-02T12:52:34.302209309Z" level=info msg="connecting to shim 8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6" address="unix:///run/containerd/s/b168b8a5c5ef50d116358d42578b40da5a6007cf55f5762cb27367af5455611f" protocol=ttrpc version=3
Mar 2 12:52:34.303672 kubelet[2414]: E0302 12:52:34.303583 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 12:52:34.306981 containerd[1568]: time="2026-03-02T12:52:34.306903529Z" level=info msg="CreateContainer within sandbox \"49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5\""
Mar 2 12:52:34.308591 containerd[1568]: time="2026-03-02T12:52:34.308531124Z" level=info msg="StartContainer for \"1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5\""
Mar 2 12:52:34.310576 containerd[1568]: time="2026-03-02T12:52:34.310409808Z" level=info msg="connecting to shim 1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5" address="unix:///run/containerd/s/e567645b14c3dbc92dbe6873afdd39a175cfb93bddc530bb5b42a85f0d61d2ba" protocol=ttrpc version=3
Mar 2 12:52:34.314867 containerd[1568]: time="2026-03-02T12:52:34.314825909Z" level=info msg="CreateContainer within sandbox \"fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac\""
Mar 2 12:52:34.316405 containerd[1568]: time="2026-03-02T12:52:34.315793602Z" level=info msg="StartContainer for \"fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac\""
Mar 2 12:52:34.318397 containerd[1568]: time="2026-03-02T12:52:34.318310322Z" level=info msg="connecting to shim fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac" address="unix:///run/containerd/s/18f9d74f9407104bb638b12e1ed92542db609168ecbd25629add237d0822b15b" protocol=ttrpc version=3
Mar 2 12:52:34.328748 systemd[1]: Started cri-containerd-8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6.scope - libcontainer container 8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6.
Mar 2 12:52:34.331141 kubelet[2414]: E0302 12:52:34.331042 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 12:52:34.337159 kubelet[2414]: E0302 12:52:34.337109 2414 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 12:52:34.344757 systemd[1]: Started cri-containerd-1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5.scope - libcontainer container 1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5.
Mar 2 12:52:34.354674 systemd[1]: Started cri-containerd-fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac.scope - libcontainer container fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac.
Mar 2 12:52:34.417262 containerd[1568]: time="2026-03-02T12:52:34.417166875Z" level=info msg="StartContainer for \"8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6\" returns successfully"
Mar 2 12:52:34.446054 kubelet[2414]: I0302 12:52:34.445911 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:34.448186 kubelet[2414]: E0302 12:52:34.448079 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Mar 2 12:52:34.448798 containerd[1568]: time="2026-03-02T12:52:34.448697887Z" level=info msg="StartContainer for \"1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5\" returns successfully"
Mar 2 12:52:34.486930 containerd[1568]: time="2026-03-02T12:52:34.486733888Z" level=info msg="StartContainer for \"fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac\" returns successfully"
Mar 2 12:52:34.879739 kubelet[2414]: E0302 12:52:34.879687 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:34.880468 kubelet[2414]: E0302 12:52:34.879830 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:34.889693 kubelet[2414]: E0302 12:52:34.889606 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:34.891465 kubelet[2414]: E0302 12:52:34.889779 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:34.891465 kubelet[2414]: E0302 12:52:34.891172 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:34.891465 kubelet[2414]: E0302 12:52:34.891399 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:35.921556 kubelet[2414]: E0302 12:52:35.921335 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:35.923484 kubelet[2414]: E0302 12:52:35.922979 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:35.923484 kubelet[2414]: E0302 12:52:35.923107 2414 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 12:52:35.923484 kubelet[2414]: E0302 12:52:35.923384 2414 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:36.294214 kubelet[2414]: I0302 12:52:36.221076 2414 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:37.623469 kubelet[2414]: E0302 12:52:37.622971 2414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 2 12:52:37.740026 kubelet[2414]: I0302 12:52:37.739888 2414 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 12:52:37.811029 kubelet[2414]: I0302 12:52:37.810934 2414 apiserver.go:52] "Watching apiserver"
Mar 2 12:52:37.824896 kubelet[2414]: I0302 12:52:37.824667 2414 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 12:52:37.826534 kubelet[2414]: I0302 12:52:37.825822 2414 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:38.011852 kubelet[2414]: E0302 12:52:38.003589 2414 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:38.011852 kubelet[2414]: I0302 12:52:38.003810 2414 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 12:52:38.027201 kubelet[2414]: E0302 12:52:38.020739 2414 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 2 12:52:38.027201 kubelet[2414]: I0302 12:52:38.020873 2414 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:38.047539 kubelet[2414]: E0302 12:52:38.040861 2414 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:41.492720 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-9.scope)...
Mar 2 12:52:41.493483 systemd[1]: Reloading...
Mar 2 12:52:41.658636 zram_generator::config[2747]: No configuration found.
Mar 2 12:52:41.923205 systemd[1]: Reloading finished in 429 ms.
Mar 2 12:52:41.960637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:52:41.969778 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 12:52:41.970087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 12:52:41.970179 systemd[1]: kubelet.service: Consumed 2.034s CPU time, 125.7M memory peak.
Mar 2 12:52:41.973650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:52:42.218280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 12:52:42.235174 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 12:52:42.329962 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 12:52:42.329962 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 12:52:42.329962 kubelet[2792]: I0302 12:52:42.329839 2792 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 12:52:42.352366 kubelet[2792]: I0302 12:52:42.352302 2792 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 2 12:52:42.352366 kubelet[2792]: I0302 12:52:42.352352 2792 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 12:52:42.352576 kubelet[2792]: I0302 12:52:42.352390 2792 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 12:52:42.352576 kubelet[2792]: I0302 12:52:42.352399 2792 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 12:52:42.352880 kubelet[2792]: I0302 12:52:42.352829 2792 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 12:52:42.354741 kubelet[2792]: I0302 12:52:42.354592 2792 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 2 12:52:42.358047 kubelet[2792]: I0302 12:52:42.357732 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 12:52:42.365416 kubelet[2792]: I0302 12:52:42.365347 2792 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 2 12:52:42.374083 kubelet[2792]: I0302 12:52:42.374009 2792 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 12:52:42.374479 kubelet[2792]: I0302 12:52:42.374378 2792 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 12:52:42.374801 kubelet[2792]: I0302 12:52:42.374521 2792 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 12:52:42.374801 kubelet[2792]: I0302 12:52:42.374754 2792 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 12:52:42.374801 kubelet[2792]: I0302 12:52:42.374770 2792 container_manager_linux.go:306] "Creating device plugin manager"
Mar 2 12:52:42.375278 kubelet[2792]: I0302 12:52:42.374811 2792 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 12:52:42.375278 kubelet[2792]: I0302 12:52:42.375062 2792 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 12:52:42.375346 kubelet[2792]: I0302 12:52:42.375282 2792 kubelet.go:475] "Attempting to sync node with API server"
Mar 2 12:52:42.375346 kubelet[2792]: I0302 12:52:42.375299 2792 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 12:52:42.375346 kubelet[2792]: I0302 12:52:42.375327 2792 kubelet.go:387] "Adding apiserver pod source"
Mar 2 12:52:42.375346 kubelet[2792]: I0302 12:52:42.375340 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 12:52:42.377081 kubelet[2792]: I0302 12:52:42.377034 2792 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 2 12:52:42.377951 kubelet[2792]: I0302 12:52:42.377819 2792 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 12:52:42.377951 kubelet[2792]: I0302 12:52:42.377888 2792 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 12:52:42.388562 kubelet[2792]: I0302 12:52:42.387882 2792 server.go:1262] "Started kubelet"
Mar 2 12:52:42.391785 kubelet[2792]: I0302 12:52:42.391718 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 12:52:42.404699 kubelet[2792]: I0302 12:52:42.404548 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 12:52:42.404699 kubelet[2792]: I0302 12:52:42.404566 2792 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 12:52:42.404869 kubelet[2792]: I0302 12:52:42.404729 2792 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 12:52:42.406040 kubelet[2792]: I0302 12:52:42.405716 2792 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 12:52:42.406567 kubelet[2792]: I0302 12:52:42.406400 2792 server.go:310] "Adding debug handlers to kubelet server"
Mar 2 12:52:42.411699 kubelet[2792]: I0302 12:52:42.411656 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 12:52:42.412869 kubelet[2792]: I0302 12:52:42.412753 2792 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 2 12:52:42.415189 kubelet[2792]: I0302 12:52:42.415089 2792 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 12:52:42.415579 kubelet[2792]: I0302 12:52:42.415299 2792 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 12:52:42.417141 kubelet[2792]: I0302 12:52:42.416368 2792 factory.go:223] Registration of the systemd container factory successfully
Mar 2 12:52:42.417141 kubelet[2792]: I0302 12:52:42.416596 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 12:52:42.419559 kubelet[2792]: E0302 12:52:42.419353 2792 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 12:52:42.420301 kubelet[2792]: I0302 12:52:42.420236 2792 factory.go:223] Registration of the containerd container factory successfully
Mar 2 12:52:42.441838 kubelet[2792]: I0302 12:52:42.441600 2792 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 12:52:42.444350 kubelet[2792]: I0302 12:52:42.443878 2792 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 12:52:42.444350 kubelet[2792]: I0302 12:52:42.443896 2792 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 2 12:52:42.444350 kubelet[2792]: I0302 12:52:42.443918 2792 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 2 12:52:42.444350 kubelet[2792]: E0302 12:52:42.444033 2792 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 12:52:42.466502 sudo[2831]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 2 12:52:42.466950 sudo[2831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 2 12:52:42.491647 kubelet[2792]: I0302 12:52:42.491374 2792 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 12:52:42.491647 kubelet[2792]: I0302 12:52:42.491497 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 12:52:42.491647 kubelet[2792]: I0302 12:52:42.491522 2792 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 12:52:42.491840 kubelet[2792]: I0302 12:52:42.491784 2792 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 2 12:52:42.491840 kubelet[2792]: I0302 12:52:42.491798 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 2 12:52:42.491840 kubelet[2792]: I0302 12:52:42.491818 2792 policy_none.go:49] "None policy: Start"
Mar 2 12:52:42.491840 kubelet[2792]: I0302 12:52:42.491834 2792 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 12:52:42.491961 kubelet[2792]: I0302 12:52:42.491853 2792 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 12:52:42.492002 kubelet[2792]: I0302 12:52:42.491977 2792 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 2 12:52:42.492002 kubelet[2792]: I0302 12:52:42.491991 2792 policy_none.go:47] "Start"
Mar 2 12:52:42.501015 kubelet[2792]: E0302 12:52:42.500973 2792 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 12:52:42.501200 kubelet[2792]: I0302 12:52:42.501150 2792 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 12:52:42.501200 kubelet[2792]: I0302 12:52:42.501180 2792 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 12:52:42.501560 kubelet[2792]: I0302 12:52:42.501373 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 12:52:42.505097 kubelet[2792]: E0302 12:52:42.505011 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 12:52:42.545542 kubelet[2792]: I0302 12:52:42.545310 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 12:52:42.545542 kubelet[2792]: I0302 12:52:42.545369 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:52:42.546259 kubelet[2792]: I0302 12:52:42.545902 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 12:52:42.618935 kubelet[2792]: I0302 12:52:42.618861 2792 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 12:52:42.641485 kubelet[2792]: I0302 12:52:42.641121 2792 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 2 12:52:42.641485 kubelet[2792]: I0302 12:52:42.641220 2792 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 12:52:42.715730 kubelet[2792]: I0302 12:52:42.715657 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-k8s-certs\") pod
\"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:52:42.715730 kubelet[2792]: I0302 12:52:42.715701 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:52:42.715730 kubelet[2792]: I0302 12:52:42.715723 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:42.715730 kubelet[2792]: I0302 12:52:42.715747 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:42.716009 kubelet[2792]: I0302 12:52:42.715762 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:42.716009 kubelet[2792]: I0302 12:52:42.715775 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a385ef26754459ee6e462b7f5bb6ab5f-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a385ef26754459ee6e462b7f5bb6ab5f\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:52:42.716009 kubelet[2792]: I0302 12:52:42.715786 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:42.716009 kubelet[2792]: I0302 12:52:42.715799 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:42.716009 kubelet[2792]: I0302 12:52:42.715816 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:52:42.875509 kubelet[2792]: E0302 12:52:42.873717 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:42.875509 kubelet[2792]: E0302 12:52:42.873879 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:42.881470 kubelet[2792]: E0302 12:52:42.881118 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:43.284151 sudo[2831]: pam_unix(sudo:session): session closed for user root Mar 2 12:52:43.395302 kubelet[2792]: I0302 12:52:43.394786 2792 apiserver.go:52] "Watching apiserver" Mar 2 12:52:43.417984 kubelet[2792]: I0302 12:52:43.417376 2792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 12:52:43.505678 kubelet[2792]: I0302 12:52:43.505306 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:52:43.505678 kubelet[2792]: I0302 12:52:43.505547 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:52:43.566323 kubelet[2792]: I0302 12:52:43.539860 2792 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:43.685127 kubelet[2792]: E0302 12:52:43.681563 2792 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 12:52:43.685127 kubelet[2792]: E0302 12:52:43.681930 2792 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 12:52:43.685127 kubelet[2792]: E0302 12:52:43.681978 2792 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:52:43.685127 kubelet[2792]: E0302 12:52:43.682933 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:43.715699 kubelet[2792]: E0302 12:52:43.696050 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 2 12:52:43.715699 kubelet[2792]: E0302 12:52:43.710347 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:43.827785 kubelet[2792]: I0302 12:52:43.820981 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8208995350000001 podStartE2EDuration="1.820899535s" podCreationTimestamp="2026-03-02 12:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:52:43.818186932 +0000 UTC m=+1.577603710" watchObservedRunningTime="2026-03-02 12:52:43.820899535 +0000 UTC m=+1.580316283" Mar 2 12:52:44.379936 kubelet[2792]: I0302 12:52:44.377934 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.377771919 podStartE2EDuration="2.377771919s" podCreationTimestamp="2026-03-02 12:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:52:44.216156433 +0000 UTC m=+1.975573191" watchObservedRunningTime="2026-03-02 12:52:44.377771919 +0000 UTC m=+2.137188667" Mar 2 12:52:44.609811 kubelet[2792]: I0302 12:52:44.608737 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.590107806 podStartE2EDuration="2.590107806s" podCreationTimestamp="2026-03-02 12:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:52:44.385566475 +0000 UTC m=+2.144983222" watchObservedRunningTime="2026-03-02 12:52:44.590107806 +0000 UTC m=+2.349524555" Mar 2 12:52:44.654937 kubelet[2792]: E0302 
12:52:44.613224 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:44.654937 kubelet[2792]: E0302 12:52:44.613595 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:44.654937 kubelet[2792]: E0302 12:52:44.632851 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:45.894975 kubelet[2792]: E0302 12:52:45.894585 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:46.886825 kubelet[2792]: E0302 12:52:46.886536 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.004357 kubelet[2792]: I0302 12:52:47.003982 2792 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 12:52:47.011168 kubelet[2792]: I0302 12:52:47.005345 2792 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 12:52:47.011554 containerd[1568]: time="2026-03-02T12:52:47.004885357Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 2 12:52:47.325586 kubelet[2792]: I0302 12:52:47.325089 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c465411-8f2b-43b9-b6de-6fed32b4871d-kube-proxy\") pod \"kube-proxy-wl8g5\" (UID: \"4c465411-8f2b-43b9-b6de-6fed32b4871d\") " pod="kube-system/kube-proxy-wl8g5" Mar 2 12:52:47.325586 kubelet[2792]: I0302 12:52:47.325163 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c465411-8f2b-43b9-b6de-6fed32b4871d-xtables-lock\") pod \"kube-proxy-wl8g5\" (UID: \"4c465411-8f2b-43b9-b6de-6fed32b4871d\") " pod="kube-system/kube-proxy-wl8g5" Mar 2 12:52:47.325586 kubelet[2792]: I0302 12:52:47.325180 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c465411-8f2b-43b9-b6de-6fed32b4871d-lib-modules\") pod \"kube-proxy-wl8g5\" (UID: \"4c465411-8f2b-43b9-b6de-6fed32b4871d\") " pod="kube-system/kube-proxy-wl8g5" Mar 2 12:52:47.325586 kubelet[2792]: I0302 12:52:47.325196 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dfm\" (UniqueName: \"kubernetes.io/projected/4c465411-8f2b-43b9-b6de-6fed32b4871d-kube-api-access-74dfm\") pod \"kube-proxy-wl8g5\" (UID: \"4c465411-8f2b-43b9-b6de-6fed32b4871d\") " pod="kube-system/kube-proxy-wl8g5" Mar 2 12:52:47.330491 systemd[1]: Created slice kubepods-burstable-pod1bc1ef55_2431_41ce_80df_9c574b5de752.slice - libcontainer container kubepods-burstable-pod1bc1ef55_2431_41ce_80df_9c574b5de752.slice. Mar 2 12:52:47.343513 systemd[1]: Created slice kubepods-besteffort-pod4c465411_8f2b_43b9_b6de_6fed32b4871d.slice - libcontainer container kubepods-besteffort-pod4c465411_8f2b_43b9_b6de_6fed32b4871d.slice. 
Mar 2 12:52:47.410086 systemd[1]: Created slice kubepods-besteffort-pod2a4de705_4910_4973_a6a4_c3c3945da20c.slice - libcontainer container kubepods-besteffort-pod2a4de705_4910_4973_a6a4_c3c3945da20c.slice. Mar 2 12:52:47.425782 kubelet[2792]: I0302 12:52:47.425671 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-bpf-maps\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426638 kubelet[2792]: I0302 12:52:47.426389 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-cgroup\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426712 kubelet[2792]: I0302 12:52:47.426602 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-run\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426760 kubelet[2792]: I0302 12:52:47.426720 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cni-path\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426760 kubelet[2792]: I0302 12:52:47.426747 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-etc-cni-netd\") pod \"cilium-8952g\" (UID: 
\"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426831 kubelet[2792]: I0302 12:52:47.426765 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-lib-modules\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426831 kubelet[2792]: I0302 12:52:47.426782 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-hubble-tls\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.426831 kubelet[2792]: I0302 12:52:47.426825 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-config-path\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.427896 kubelet[2792]: I0302 12:52:47.426845 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-kernel\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.427896 kubelet[2792]: I0302 12:52:47.426865 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729qn\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-kube-api-access-729qn\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.427896 
kubelet[2792]: I0302 12:52:47.426889 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-hostproc\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.427896 kubelet[2792]: I0302 12:52:47.426910 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-xtables-lock\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.427896 kubelet[2792]: I0302 12:52:47.426927 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1bc1ef55-2431-41ce-80df-9c574b5de752-clustermesh-secrets\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.428004 kubelet[2792]: I0302 12:52:47.426948 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-net\") pod \"cilium-8952g\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " pod="kube-system/cilium-8952g" Mar 2 12:52:47.528505 kubelet[2792]: I0302 12:52:47.528354 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a4de705-4910-4973-a6a4-c3c3945da20c-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-tnjdt\" (UID: \"2a4de705-4910-4973-a6a4-c3c3945da20c\") " pod="kube-system/cilium-operator-6f9c7c5859-tnjdt" Mar 2 12:52:47.528505 kubelet[2792]: I0302 12:52:47.528510 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkgk\" (UniqueName: \"kubernetes.io/projected/2a4de705-4910-4973-a6a4-c3c3945da20c-kube-api-access-zjkgk\") pod \"cilium-operator-6f9c7c5859-tnjdt\" (UID: \"2a4de705-4910-4973-a6a4-c3c3945da20c\") " pod="kube-system/cilium-operator-6f9c7c5859-tnjdt" Mar 2 12:52:47.675385 kubelet[2792]: E0302 12:52:47.675298 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.678810 containerd[1568]: time="2026-03-02T12:52:47.678711391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wl8g5,Uid:4c465411-8f2b-43b9-b6de-6fed32b4871d,Namespace:kube-system,Attempt:0,}" Mar 2 12:52:47.718675 kubelet[2792]: E0302 12:52:47.718632 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.719470 containerd[1568]: time="2026-03-02T12:52:47.719326338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-tnjdt,Uid:2a4de705-4910-4973-a6a4-c3c3945da20c,Namespace:kube-system,Attempt:0,}" Mar 2 12:52:47.753014 containerd[1568]: time="2026-03-02T12:52:47.752892196Z" level=info msg="connecting to shim ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69" address="unix:///run/containerd/s/a35cf59012fe875f5bf39e86f9a9dd2c3171a73b394fc437569e95121967440c" namespace=k8s.io protocol=ttrpc version=3 Mar 2 12:52:47.795637 containerd[1568]: time="2026-03-02T12:52:47.795542116Z" level=info msg="connecting to shim 21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd" address="unix:///run/containerd/s/58296dd73fdb9b84a968dc1c643c58d09f5bfa2474f5aeed6feb5f0dd309daff" namespace=k8s.io protocol=ttrpc version=3 Mar 2 12:52:47.804832 systemd[1]: Started 
cri-containerd-ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69.scope - libcontainer container ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69. Mar 2 12:52:47.875218 systemd[1]: Started cri-containerd-21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd.scope - libcontainer container 21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd. Mar 2 12:52:47.922952 containerd[1568]: time="2026-03-02T12:52:47.922836409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wl8g5,Uid:4c465411-8f2b-43b9-b6de-6fed32b4871d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69\"" Mar 2 12:52:47.924416 kubelet[2792]: E0302 12:52:47.924323 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.931559 containerd[1568]: time="2026-03-02T12:52:47.931393884Z" level=info msg="CreateContainer within sandbox \"ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 12:52:47.943485 kubelet[2792]: E0302 12:52:47.943364 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.951770 containerd[1568]: time="2026-03-02T12:52:47.951697481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8952g,Uid:1bc1ef55-2431-41ce-80df-9c574b5de752,Namespace:kube-system,Attempt:0,}" Mar 2 12:52:47.970500 containerd[1568]: time="2026-03-02T12:52:47.970207344Z" level=info msg="Container 1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4: CDI devices from CRI Config.CDIDevices: []" Mar 2 12:52:47.972564 containerd[1568]: time="2026-03-02T12:52:47.972532601Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-tnjdt,Uid:2a4de705-4910-4973-a6a4-c3c3945da20c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\"" Mar 2 12:52:47.974237 kubelet[2792]: E0302 12:52:47.974031 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:47.978562 containerd[1568]: time="2026-03-02T12:52:47.978377221Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 12:52:47.987277 containerd[1568]: time="2026-03-02T12:52:47.987246991Z" level=info msg="CreateContainer within sandbox \"ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4\"" Mar 2 12:52:47.988414 containerd[1568]: time="2026-03-02T12:52:47.988328918Z" level=info msg="StartContainer for \"1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4\"" Mar 2 12:52:47.992747 containerd[1568]: time="2026-03-02T12:52:47.992665103Z" level=info msg="connecting to shim 1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4" address="unix:///run/containerd/s/a35cf59012fe875f5bf39e86f9a9dd2c3171a73b394fc437569e95121967440c" protocol=ttrpc version=3 Mar 2 12:52:48.022202 containerd[1568]: time="2026-03-02T12:52:48.022131703Z" level=info msg="connecting to shim 9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" namespace=k8s.io protocol=ttrpc version=3 Mar 2 12:52:48.034748 systemd[1]: Started cri-containerd-1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4.scope - libcontainer container 
1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4. Mar 2 12:52:48.077545 sudo[1792]: pam_unix(sudo:session): session closed for user root Mar 2 12:52:48.079275 sshd[1791]: Connection closed by 10.0.0.1 port 42816 Mar 2 12:52:48.080749 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Mar 2 12:52:48.088866 systemd[1]: Started cri-containerd-9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28.scope - libcontainer container 9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28. Mar 2 12:52:48.089683 systemd[1]: sshd@8-10.0.0.17:22-10.0.0.1:42816.service: Deactivated successfully. Mar 2 12:52:48.093278 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 12:52:48.094396 systemd[1]: session-9.scope: Consumed 10.855s CPU time, 277.3M memory peak. Mar 2 12:52:48.097897 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Mar 2 12:52:48.100293 systemd-logind[1549]: Removed session 9. Mar 2 12:52:48.133406 containerd[1568]: time="2026-03-02T12:52:48.133296058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8952g,Uid:1bc1ef55-2431-41ce-80df-9c574b5de752,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\"" Mar 2 12:52:48.134407 kubelet[2792]: E0302 12:52:48.134384 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:52:48.136486 containerd[1568]: time="2026-03-02T12:52:48.136370249Z" level=info msg="StartContainer for \"1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4\" returns successfully" Mar 2 12:52:48.773696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878791461.mount: Deactivated successfully. 
Mar 2 12:52:48.899002 kubelet[2792]: E0302 12:52:48.898923 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:48.915294 kubelet[2792]: I0302 12:52:48.915158 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wl8g5" podStartSLOduration=1.915137544 podStartE2EDuration="1.915137544s" podCreationTimestamp="2026-03-02 12:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:52:48.914807789 +0000 UTC m=+6.674224568" watchObservedRunningTime="2026-03-02 12:52:48.915137544 +0000 UTC m=+6.674554293"
Mar 2 12:52:53.648989 kubelet[2792]: E0302 12:52:53.645355 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:53.688966 kubelet[2792]: E0302 12:52:53.688927 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:54.632516 kubelet[2792]: E0302 12:52:54.632296 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:54.827207 containerd[1568]: time="2026-03-02T12:52:54.827116281Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:52:54.828346 containerd[1568]: time="2026-03-02T12:52:54.828179974Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 2 12:52:54.829408 containerd[1568]: time="2026-03-02T12:52:54.829322205Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:52:54.830939 containerd[1568]: time="2026-03-02T12:52:54.830883336Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.852338734s"
Mar 2 12:52:54.830939 containerd[1568]: time="2026-03-02T12:52:54.830929853Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 2 12:52:54.832845 containerd[1568]: time="2026-03-02T12:52:54.832765784Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 2 12:52:54.838828 containerd[1568]: time="2026-03-02T12:52:54.838676477Z" level=info msg="CreateContainer within sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 2 12:52:54.857086 containerd[1568]: time="2026-03-02T12:52:54.856864477Z" level=info msg="Container f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:52:54.871225 containerd[1568]: time="2026-03-02T12:52:54.871121525Z" level=info msg="CreateContainer within sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\""
Mar 2 12:52:54.872194 containerd[1568]: time="2026-03-02T12:52:54.872061405Z" level=info msg="StartContainer for \"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\""
Mar 2 12:52:54.873901 containerd[1568]: time="2026-03-02T12:52:54.873859369Z" level=info msg="connecting to shim f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe" address="unix:///run/containerd/s/58296dd73fdb9b84a968dc1c643c58d09f5bfa2474f5aeed6feb5f0dd309daff" protocol=ttrpc version=3
Mar 2 12:52:54.934768 systemd[1]: Started cri-containerd-f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe.scope - libcontainer container f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe.
Mar 2 12:52:55.018994 containerd[1568]: time="2026-03-02T12:52:55.018919459Z" level=info msg="StartContainer for \"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\" returns successfully"
Mar 2 12:52:56.267204 kubelet[2792]: E0302 12:52:56.264337 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:52:56.572562 kubelet[2792]: I0302 12:52:56.563577 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-tnjdt" podStartSLOduration=2.7078050940000002 podStartE2EDuration="9.563351749s" podCreationTimestamp="2026-03-02 12:52:47 +0000 UTC" firstStartedPulling="2026-03-02 12:52:47.97698768 +0000 UTC m=+5.736404428" lastFinishedPulling="2026-03-02 12:52:54.832534334 +0000 UTC m=+12.591951083" observedRunningTime="2026-03-02 12:52:56.560310342 +0000 UTC m=+14.319727090" watchObservedRunningTime="2026-03-02 12:52:56.563351749 +0000 UTC m=+14.322768517"
Mar 2 12:52:57.391526 kubelet[2792]: E0302 12:52:57.391310 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:09.599396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903361669.mount: Deactivated successfully.
Mar 2 12:53:15.308931 containerd[1568]: time="2026-03-02T12:53:15.308213133Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:53:15.310204 containerd[1568]: time="2026-03-02T12:53:15.309605531Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 2 12:53:15.310927 containerd[1568]: time="2026-03-02T12:53:15.310867518Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:53:15.313313 containerd[1568]: time="2026-03-02T12:53:15.313275461Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 20.480452681s"
Mar 2 12:53:15.313466 containerd[1568]: time="2026-03-02T12:53:15.313317299Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 2 12:53:15.319895 containerd[1568]: time="2026-03-02T12:53:15.319564484Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 12:53:15.330002 containerd[1568]: time="2026-03-02T12:53:15.329908016Z" level=info msg="Container 12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:15.340841 containerd[1568]: time="2026-03-02T12:53:15.340758362Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\""
Mar 2 12:53:15.341846 containerd[1568]: time="2026-03-02T12:53:15.341711751Z" level=info msg="StartContainer for \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\""
Mar 2 12:53:15.343237 containerd[1568]: time="2026-03-02T12:53:15.343166496Z" level=info msg="connecting to shim 12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" protocol=ttrpc version=3
Mar 2 12:53:15.419813 systemd[1]: Started cri-containerd-12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc.scope - libcontainer container 12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc.
Mar 2 12:53:15.481877 containerd[1568]: time="2026-03-02T12:53:15.481840261Z" level=info msg="StartContainer for \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\" returns successfully"
Mar 2 12:53:15.530154 systemd[1]: cri-containerd-12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc.scope: Deactivated successfully.
Mar 2 12:53:15.532114 systemd[1]: cri-containerd-12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc.scope: Consumed 64ms CPU time, 6.9M memory peak, 4K read from disk, 3.2M written to disk.
Mar 2 12:53:15.542220 containerd[1568]: time="2026-03-02T12:53:15.542074885Z" level=info msg="received container exit event container_id:\"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\" id:\"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\" pid:3276 exited_at:{seconds:1772455995 nanos:540090209}"
Mar 2 12:53:15.623939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc-rootfs.mount: Deactivated successfully.
Mar 2 12:53:15.797043 kubelet[2792]: E0302 12:53:15.796571 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:16.801866 kubelet[2792]: E0302 12:53:16.801720 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:16.811391 containerd[1568]: time="2026-03-02T12:53:16.811328476Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 12:53:16.832542 containerd[1568]: time="2026-03-02T12:53:16.832331677Z" level=info msg="Container 84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:16.842957 containerd[1568]: time="2026-03-02T12:53:16.842882362Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\""
Mar 2 12:53:16.844187 containerd[1568]: time="2026-03-02T12:53:16.844157564Z" level=info msg="StartContainer for \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\""
Mar 2 12:53:16.845607 containerd[1568]: time="2026-03-02T12:53:16.845542238Z" level=info msg="connecting to shim 84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" protocol=ttrpc version=3
Mar 2 12:53:16.890759 systemd[1]: Started cri-containerd-84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b.scope - libcontainer container 84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b.
Mar 2 12:53:16.944636 containerd[1568]: time="2026-03-02T12:53:16.944544840Z" level=info msg="StartContainer for \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\" returns successfully"
Mar 2 12:53:16.975936 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 12:53:16.977249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:53:16.977886 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:53:16.980757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:53:16.983897 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 12:53:16.984639 systemd[1]: cri-containerd-84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b.scope: Deactivated successfully.
Mar 2 12:53:16.985796 containerd[1568]: time="2026-03-02T12:53:16.985762154Z" level=info msg="received container exit event container_id:\"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\" id:\"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\" pid:3321 exited_at:{seconds:1772455996 nanos:985238026}"
Mar 2 12:53:17.041785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:53:17.815491 kubelet[2792]: E0302 12:53:17.815287 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:17.823793 containerd[1568]: time="2026-03-02T12:53:17.823579287Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 12:53:17.832264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b-rootfs.mount: Deactivated successfully.
Mar 2 12:53:17.850198 containerd[1568]: time="2026-03-02T12:53:17.850128337Z" level=info msg="Container d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:17.873923 containerd[1568]: time="2026-03-02T12:53:17.873779203Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\""
Mar 2 12:53:17.875247 containerd[1568]: time="2026-03-02T12:53:17.875167644Z" level=info msg="StartContainer for \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\""
Mar 2 12:53:17.877775 containerd[1568]: time="2026-03-02T12:53:17.877680424Z" level=info msg="connecting to shim d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" protocol=ttrpc version=3
Mar 2 12:53:17.916734 systemd[1]: Started cri-containerd-d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512.scope - libcontainer container d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512.
Mar 2 12:53:18.032356 containerd[1568]: time="2026-03-02T12:53:18.032263488Z" level=info msg="StartContainer for \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\" returns successfully"
Mar 2 12:53:18.035263 systemd[1]: cri-containerd-d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512.scope: Deactivated successfully.
Mar 2 12:53:18.037881 containerd[1568]: time="2026-03-02T12:53:18.037814826Z" level=info msg="received container exit event container_id:\"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\" id:\"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\" pid:3367 exited_at:{seconds:1772455998 nanos:37560752}"
Mar 2 12:53:18.086287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512-rootfs.mount: Deactivated successfully.
Mar 2 12:53:18.828500 kubelet[2792]: E0302 12:53:18.828376 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:18.835954 containerd[1568]: time="2026-03-02T12:53:18.835865596Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 12:53:18.856212 containerd[1568]: time="2026-03-02T12:53:18.855300648Z" level=info msg="Container bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:18.862085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332308051.mount: Deactivated successfully.
Mar 2 12:53:18.869649 containerd[1568]: time="2026-03-02T12:53:18.869565426Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\""
Mar 2 12:53:18.870478 containerd[1568]: time="2026-03-02T12:53:18.870318322Z" level=info msg="StartContainer for \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\""
Mar 2 12:53:18.871644 containerd[1568]: time="2026-03-02T12:53:18.871573424Z" level=info msg="connecting to shim bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" protocol=ttrpc version=3
Mar 2 12:53:18.912842 systemd[1]: Started cri-containerd-bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c.scope - libcontainer container bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c.
Mar 2 12:53:18.962663 systemd[1]: cri-containerd-bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c.scope: Deactivated successfully.
Mar 2 12:53:18.965192 containerd[1568]: time="2026-03-02T12:53:18.965125534Z" level=info msg="received container exit event container_id:\"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\" id:\"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\" pid:3409 exited_at:{seconds:1772455998 nanos:963002176}"
Mar 2 12:53:18.979617 containerd[1568]: time="2026-03-02T12:53:18.979561860Z" level=info msg="StartContainer for \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\" returns successfully"
Mar 2 12:53:18.995580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c-rootfs.mount: Deactivated successfully.
Mar 2 12:53:19.835191 kubelet[2792]: E0302 12:53:19.835130 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:19.840514 containerd[1568]: time="2026-03-02T12:53:19.840470478Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 12:53:19.871861 containerd[1568]: time="2026-03-02T12:53:19.870861982Z" level=info msg="Container 45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:19.883539 containerd[1568]: time="2026-03-02T12:53:19.883381660Z" level=info msg="CreateContainer within sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\""
Mar 2 12:53:19.884298 containerd[1568]: time="2026-03-02T12:53:19.884235647Z" level=info msg="StartContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\""
Mar 2 12:53:19.885551 containerd[1568]: time="2026-03-02T12:53:19.885397376Z" level=info msg="connecting to shim 45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490" address="unix:///run/containerd/s/9f77c140af5940bbb762f5938e45b0a55f013ae284d75ddbee9f26073dd5be59" protocol=ttrpc version=3
Mar 2 12:53:19.916653 systemd[1]: Started cri-containerd-45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490.scope - libcontainer container 45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490.
Mar 2 12:53:20.006470 containerd[1568]: time="2026-03-02T12:53:20.006317211Z" level=info msg="StartContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" returns successfully"
Mar 2 12:53:20.201110 kubelet[2792]: I0302 12:53:20.201011 2792 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 2 12:53:20.264027 systemd[1]: Created slice kubepods-burstable-pod266b8f84_bd70_4e42_a519_f04b43dddb2e.slice - libcontainer container kubepods-burstable-pod266b8f84_bd70_4e42_a519_f04b43dddb2e.slice.
Mar 2 12:53:20.275760 systemd[1]: Created slice kubepods-burstable-pod6325fa4b_1755_48b4_b3f7_f25b8f6ed550.slice - libcontainer container kubepods-burstable-pod6325fa4b_1755_48b4_b3f7_f25b8f6ed550.slice.
Mar 2 12:53:20.296144 kubelet[2792]: I0302 12:53:20.296104 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b8f84-bd70-4e42-a519-f04b43dddb2e-config-volume\") pod \"coredns-66bc5c9577-pdjzp\" (UID: \"266b8f84-bd70-4e42-a519-f04b43dddb2e\") " pod="kube-system/coredns-66bc5c9577-pdjzp"
Mar 2 12:53:20.296520 kubelet[2792]: I0302 12:53:20.296399 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wtgm\" (UniqueName: \"kubernetes.io/projected/266b8f84-bd70-4e42-a519-f04b43dddb2e-kube-api-access-2wtgm\") pod \"coredns-66bc5c9577-pdjzp\" (UID: \"266b8f84-bd70-4e42-a519-f04b43dddb2e\") " pod="kube-system/coredns-66bc5c9577-pdjzp"
Mar 2 12:53:20.397666 kubelet[2792]: I0302 12:53:20.397527 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs2cq\" (UniqueName: \"kubernetes.io/projected/6325fa4b-1755-48b4-b3f7-f25b8f6ed550-kube-api-access-zs2cq\") pod \"coredns-66bc5c9577-qtx5m\" (UID: \"6325fa4b-1755-48b4-b3f7-f25b8f6ed550\") " pod="kube-system/coredns-66bc5c9577-qtx5m"
Mar 2 12:53:20.397666 kubelet[2792]: I0302 12:53:20.397596 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6325fa4b-1755-48b4-b3f7-f25b8f6ed550-config-volume\") pod \"coredns-66bc5c9577-qtx5m\" (UID: \"6325fa4b-1755-48b4-b3f7-f25b8f6ed550\") " pod="kube-system/coredns-66bc5c9577-qtx5m"
Mar 2 12:53:20.579881 kubelet[2792]: E0302 12:53:20.579689 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:20.581305 containerd[1568]: time="2026-03-02T12:53:20.581200857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pdjzp,Uid:266b8f84-bd70-4e42-a519-f04b43dddb2e,Namespace:kube-system,Attempt:0,}"
Mar 2 12:53:20.584284 kubelet[2792]: E0302 12:53:20.584139 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:20.585116 containerd[1568]: time="2026-03-02T12:53:20.585070306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtx5m,Uid:6325fa4b-1755-48b4-b3f7-f25b8f6ed550,Namespace:kube-system,Attempt:0,}"
Mar 2 12:53:20.844301 kubelet[2792]: E0302 12:53:20.844061 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:20.872297 kubelet[2792]: I0302 12:53:20.872196 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8952g" podStartSLOduration=6.692964979 podStartE2EDuration="33.872123379s" podCreationTimestamp="2026-03-02 12:52:47 +0000 UTC" firstStartedPulling="2026-03-02 12:52:48.135158104 +0000 UTC m=+5.894574852" lastFinishedPulling="2026-03-02 12:53:15.314316504 +0000 UTC m=+33.073733252" observedRunningTime="2026-03-02 12:53:20.871541794 +0000 UTC m=+38.630958562" watchObservedRunningTime="2026-03-02 12:53:20.872123379 +0000 UTC m=+38.631540127"
Mar 2 12:53:21.865236 kubelet[2792]: E0302 12:53:21.864825 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:22.533009 systemd-networkd[1479]: cilium_host: Link UP
Mar 2 12:53:22.534981 systemd-networkd[1479]: cilium_net: Link UP
Mar 2 12:53:22.535673 systemd-networkd[1479]: cilium_net: Gained carrier
Mar 2 12:53:22.536801 systemd-networkd[1479]: cilium_host: Gained carrier
Mar 2 12:53:22.697628 systemd-networkd[1479]: cilium_host: Gained IPv6LL
Mar 2 12:53:22.702210 systemd-networkd[1479]: cilium_vxlan: Link UP
Mar 2 12:53:22.702236 systemd-networkd[1479]: cilium_vxlan: Gained carrier
Mar 2 12:53:22.867509 kubelet[2792]: E0302 12:53:22.867362 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:23.005568 kernel: NET: Registered PF_ALG protocol family
Mar 2 12:53:23.497924 systemd-networkd[1479]: cilium_net: Gained IPv6LL
Mar 2 12:53:23.869487 kubelet[2792]: E0302 12:53:23.869309 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:23.978370 systemd-networkd[1479]: lxc_health: Link UP
Mar 2 12:53:23.982917 systemd-networkd[1479]: lxc_health: Gained carrier
Mar 2 12:53:24.152035 kernel: eth0: renamed from tmp9e545
Mar 2 12:53:24.154042 systemd-networkd[1479]: lxc419924ddc9c9: Link UP
Mar 2 12:53:24.161923 systemd-networkd[1479]: lxc2de3119d7e0f: Link UP
Mar 2 12:53:24.172571 kernel: eth0: renamed from tmp0eae3
Mar 2 12:53:24.176162 systemd-networkd[1479]: lxc419924ddc9c9: Gained carrier
Mar 2 12:53:24.177005 systemd-networkd[1479]: lxc2de3119d7e0f: Gained carrier
Mar 2 12:53:24.521550 systemd-networkd[1479]: cilium_vxlan: Gained IPv6LL
Mar 2 12:53:25.096718 systemd-networkd[1479]: lxc_health: Gained IPv6LL
Mar 2 12:53:25.288859 systemd-networkd[1479]: lxc2de3119d7e0f: Gained IPv6LL
Mar 2 12:53:25.289298 systemd-networkd[1479]: lxc419924ddc9c9: Gained IPv6LL
Mar 2 12:53:25.942636 kubelet[2792]: E0302 12:53:25.942513 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:26.877306 kubelet[2792]: E0302 12:53:26.877201 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:28.050647 containerd[1568]: time="2026-03-02T12:53:28.050587177Z" level=info msg="connecting to shim 9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c" address="unix:///run/containerd/s/ee436e53c91e677b4b84ef20d8c3c7ad38952903489133e0302d33ebf00d1034" namespace=k8s.io protocol=ttrpc version=3
Mar 2 12:53:28.051839 containerd[1568]: time="2026-03-02T12:53:28.051754837Z" level=info msg="connecting to shim 0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8" address="unix:///run/containerd/s/bf7657b281b4731b28132ea5d85167394ccbe6491b10028e8c3022b907f815fc" namespace=k8s.io protocol=ttrpc version=3
Mar 2 12:53:28.110839 systemd[1]: Started cri-containerd-9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c.scope - libcontainer container 9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c.
Mar 2 12:53:28.117302 systemd[1]: Started cri-containerd-0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8.scope - libcontainer container 0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8.
Mar 2 12:53:28.139532 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:53:28.146917 systemd-resolved[1486]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:53:28.207473 containerd[1568]: time="2026-03-02T12:53:28.207351514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtx5m,Uid:6325fa4b-1755-48b4-b3f7-f25b8f6ed550,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c\""
Mar 2 12:53:28.208365 kubelet[2792]: E0302 12:53:28.208257 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:28.209938 containerd[1568]: time="2026-03-02T12:53:28.209779187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pdjzp,Uid:266b8f84-bd70-4e42-a519-f04b43dddb2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8\""
Mar 2 12:53:28.210824 kubelet[2792]: E0302 12:53:28.210781 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:28.214849 containerd[1568]: time="2026-03-02T12:53:28.214804357Z" level=info msg="CreateContainer within sandbox \"9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 12:53:28.218046 containerd[1568]: time="2026-03-02T12:53:28.217998298Z" level=info msg="CreateContainer within sandbox \"0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 12:53:28.240726 containerd[1568]: time="2026-03-02T12:53:28.240612887Z" level=info msg="Container 9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:28.251232 containerd[1568]: time="2026-03-02T12:53:28.251168350Z" level=info msg="CreateContainer within sandbox \"9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68\""
Mar 2 12:53:28.251815 containerd[1568]: time="2026-03-02T12:53:28.251783855Z" level=info msg="Container 47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:53:28.253916 containerd[1568]: time="2026-03-02T12:53:28.252415390Z" level=info msg="StartContainer for \"9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68\""
Mar 2 12:53:28.255078 containerd[1568]: time="2026-03-02T12:53:28.254981994Z" level=info msg="connecting to shim 9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68" address="unix:///run/containerd/s/ee436e53c91e677b4b84ef20d8c3c7ad38952903489133e0302d33ebf00d1034" protocol=ttrpc version=3
Mar 2 12:53:28.260703 containerd[1568]: time="2026-03-02T12:53:28.260584672Z" level=info msg="CreateContainer within sandbox \"0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7\""
Mar 2 12:53:28.261588 containerd[1568]: time="2026-03-02T12:53:28.261495796Z" level=info msg="StartContainer for \"47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7\""
Mar 2 12:53:28.276551 containerd[1568]: time="2026-03-02T12:53:28.276492210Z" level=info msg="connecting to shim 47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7" address="unix:///run/containerd/s/bf7657b281b4731b28132ea5d85167394ccbe6491b10028e8c3022b907f815fc" protocol=ttrpc version=3
Mar 2 12:53:28.278648 systemd[1]: Started cri-containerd-9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68.scope - libcontainer container 9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68.
Mar 2 12:53:28.317718 systemd[1]: Started cri-containerd-47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7.scope - libcontainer container 47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7.
Mar 2 12:53:28.426970 containerd[1568]: time="2026-03-02T12:53:28.426837366Z" level=info msg="StartContainer for \"47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7\" returns successfully"
Mar 2 12:53:28.427630 containerd[1568]: time="2026-03-02T12:53:28.427230807Z" level=info msg="StartContainer for \"9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68\" returns successfully"
Mar 2 12:53:28.890347 kubelet[2792]: E0302 12:53:28.890139 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:28.901457 kubelet[2792]: E0302 12:53:28.901217 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:28.914477 kubelet[2792]: I0302 12:53:28.913881 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pdjzp" podStartSLOduration=41.913859719 podStartE2EDuration="41.913859719s" podCreationTimestamp="2026-03-02 12:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:53:28.911186406 +0000 UTC m=+46.670603194" watchObservedRunningTime="2026-03-02 12:53:28.913859719 +0000 UTC m=+46.673276487"
Mar 2 12:53:28.956841 kubelet[2792]: I0302 12:53:28.956743 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qtx5m" podStartSLOduration=41.956725551 podStartE2EDuration="41.956725551s" podCreationTimestamp="2026-03-02 12:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:53:28.956072536 +0000 UTC m=+46.715489294" watchObservedRunningTime="2026-03-02 12:53:28.956725551 +0000 UTC m=+46.716142299"
Mar 2 12:53:29.903731 kubelet[2792]: E0302 12:53:29.903586 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:29.904160 kubelet[2792]: E0302 12:53:29.904005 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:30.908935 kubelet[2792]: E0302 12:53:30.908638 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:53:30.908935 kubelet[2792]: E0302 12:53:30.908668 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:00.449149 kubelet[2792]: E0302 12:54:00.448690 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:07.451921 kubelet[2792]: E0302 12:54:07.450875 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:12.447893 kubelet[2792]: E0302 12:54:12.447308 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:15.445251 kubelet[2792]: E0302 12:54:15.445190 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:16.180327 systemd[1]: Started sshd@9-10.0.0.17:22-10.0.0.1:37766.service - OpenSSH per-connection server daemon (10.0.0.1:37766).
Mar 2 12:54:16.260071 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 37766 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:54:16.262078 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:54:16.269217 systemd-logind[1549]: New session 10 of user core.
Mar 2 12:54:16.282713 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 2 12:54:16.390370 sshd[4127]: Connection closed by 10.0.0.1 port 37766
Mar 2 12:54:16.391003 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
Mar 2 12:54:16.395713 systemd[1]: sshd@9-10.0.0.17:22-10.0.0.1:37766.service: Deactivated successfully.
Mar 2 12:54:16.397869 systemd[1]: session-10.scope: Deactivated successfully.
Mar 2 12:54:16.399366 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit.
Mar 2 12:54:16.401207 systemd-logind[1549]: Removed session 10.
Mar 2 12:54:17.445339 kubelet[2792]: E0302 12:54:17.445202 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:54:21.410056 systemd[1]: Started sshd@10-10.0.0.17:22-10.0.0.1:44414.service - OpenSSH per-connection server daemon (10.0.0.1:44414).
Mar 2 12:54:21.477284 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 44414 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:21.479388 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:21.486345 systemd-logind[1549]: New session 11 of user core. Mar 2 12:54:21.496591 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 12:54:21.584545 sshd[4146]: Connection closed by 10.0.0.1 port 44414 Mar 2 12:54:21.585887 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:21.590728 systemd[1]: sshd@10-10.0.0.17:22-10.0.0.1:44414.service: Deactivated successfully. Mar 2 12:54:21.592899 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 12:54:21.593938 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Mar 2 12:54:21.595550 systemd-logind[1549]: Removed session 11. Mar 2 12:54:26.602888 systemd[1]: Started sshd@11-10.0.0.17:22-10.0.0.1:44424.service - OpenSSH per-connection server daemon (10.0.0.1:44424). Mar 2 12:54:26.662315 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:26.663874 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:26.670795 systemd-logind[1549]: New session 12 of user core. Mar 2 12:54:26.682728 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 12:54:26.778722 sshd[4163]: Connection closed by 10.0.0.1 port 44424 Mar 2 12:54:26.779083 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:26.784918 systemd[1]: sshd@11-10.0.0.17:22-10.0.0.1:44424.service: Deactivated successfully. Mar 2 12:54:26.787864 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 12:54:26.789301 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. 
Mar 2 12:54:26.791244 systemd-logind[1549]: Removed session 12. Mar 2 12:54:31.801084 systemd[1]: Started sshd@12-10.0.0.17:22-10.0.0.1:56384.service - OpenSSH per-connection server daemon (10.0.0.1:56384). Mar 2 12:54:31.865591 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 56384 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:31.867388 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:31.873525 systemd-logind[1549]: New session 13 of user core. Mar 2 12:54:31.883725 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 12:54:31.997331 sshd[4180]: Connection closed by 10.0.0.1 port 56384 Mar 2 12:54:31.998380 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:32.004901 systemd[1]: sshd@12-10.0.0.17:22-10.0.0.1:56384.service: Deactivated successfully. Mar 2 12:54:32.008295 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 12:54:32.009934 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Mar 2 12:54:32.011786 systemd-logind[1549]: Removed session 13. Mar 2 12:54:37.014216 systemd[1]: Started sshd@13-10.0.0.17:22-10.0.0.1:56400.service - OpenSSH per-connection server daemon (10.0.0.1:56400). Mar 2 12:54:37.076890 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 56400 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:37.078753 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:37.086157 systemd-logind[1549]: New session 14 of user core. Mar 2 12:54:37.093655 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 12:54:37.182123 sshd[4197]: Connection closed by 10.0.0.1 port 56400 Mar 2 12:54:37.182544 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:37.187225 systemd[1]: sshd@13-10.0.0.17:22-10.0.0.1:56400.service: Deactivated successfully. 
Mar 2 12:54:37.189343 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 12:54:37.190460 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Mar 2 12:54:37.192285 systemd-logind[1549]: Removed session 14. Mar 2 12:54:42.202758 systemd[1]: Started sshd@14-10.0.0.17:22-10.0.0.1:47532.service - OpenSSH per-connection server daemon (10.0.0.1:47532). Mar 2 12:54:42.273615 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 47532 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:42.275360 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:42.282295 systemd-logind[1549]: New session 15 of user core. Mar 2 12:54:42.296751 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 12:54:42.398976 sshd[4214]: Connection closed by 10.0.0.1 port 47532 Mar 2 12:54:42.399412 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:42.404799 systemd[1]: sshd@14-10.0.0.17:22-10.0.0.1:47532.service: Deactivated successfully. Mar 2 12:54:42.407664 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 12:54:42.409274 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. Mar 2 12:54:42.411718 systemd-logind[1549]: Removed session 15. Mar 2 12:54:47.425734 systemd[1]: Started sshd@15-10.0.0.17:22-10.0.0.1:47542.service - OpenSSH per-connection server daemon (10.0.0.1:47542). Mar 2 12:54:47.502164 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 47542 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:47.504532 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:47.511630 systemd-logind[1549]: New session 16 of user core. Mar 2 12:54:47.522773 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 2 12:54:47.623381 sshd[4233]: Connection closed by 10.0.0.1 port 47542 Mar 2 12:54:47.623964 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:47.629766 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Mar 2 12:54:47.630213 systemd[1]: sshd@15-10.0.0.17:22-10.0.0.1:47542.service: Deactivated successfully. Mar 2 12:54:47.632827 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 12:54:47.635543 systemd-logind[1549]: Removed session 16. Mar 2 12:54:49.445498 kubelet[2792]: E0302 12:54:49.445353 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:54:52.675566 systemd[1]: Started sshd@16-10.0.0.17:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258). Mar 2 12:54:52.777611 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:52.785000 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:52.795494 systemd-logind[1549]: New session 17 of user core. Mar 2 12:54:52.801760 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 12:54:53.007204 sshd[4252]: Connection closed by 10.0.0.1 port 54258 Mar 2 12:54:53.011189 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:53.030948 systemd[1]: sshd@16-10.0.0.17:22-10.0.0.1:54258.service: Deactivated successfully. Mar 2 12:54:53.036670 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 12:54:53.038834 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Mar 2 12:54:53.049264 systemd[1]: Started sshd@17-10.0.0.17:22-10.0.0.1:54274.service - OpenSSH per-connection server daemon (10.0.0.1:54274). Mar 2 12:54:53.053035 systemd-logind[1549]: Removed session 17. 
Mar 2 12:54:53.142156 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:53.144291 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:53.155101 systemd-logind[1549]: New session 18 of user core. Mar 2 12:54:53.171864 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 12:54:53.343097 sshd[4269]: Connection closed by 10.0.0.1 port 54274 Mar 2 12:54:53.342685 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:53.357557 systemd[1]: sshd@17-10.0.0.17:22-10.0.0.1:54274.service: Deactivated successfully. Mar 2 12:54:53.362895 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 12:54:53.365681 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Mar 2 12:54:53.372282 systemd[1]: Started sshd@18-10.0.0.17:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280). Mar 2 12:54:53.373596 systemd-logind[1549]: Removed session 18. Mar 2 12:54:53.442248 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:53.444246 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:53.455132 systemd-logind[1549]: New session 19 of user core. Mar 2 12:54:53.464650 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 12:54:53.581209 sshd[4283]: Connection closed by 10.0.0.1 port 54280 Mar 2 12:54:53.581770 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:53.587414 systemd[1]: sshd@18-10.0.0.17:22-10.0.0.1:54280.service: Deactivated successfully. Mar 2 12:54:53.590827 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 12:54:53.592304 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. 
Mar 2 12:54:53.594120 systemd-logind[1549]: Removed session 19. Mar 2 12:54:56.458847 kubelet[2792]: E0302 12:54:56.456120 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:54:58.635216 systemd[1]: Started sshd@19-10.0.0.17:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288). Mar 2 12:54:58.813788 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:54:58.820294 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:54:58.838316 systemd-logind[1549]: New session 20 of user core. Mar 2 12:54:58.860826 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 12:54:59.282827 sshd[4300]: Connection closed by 10.0.0.1 port 54288 Mar 2 12:54:59.281052 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Mar 2 12:54:59.432286 systemd[1]: sshd@19-10.0.0.17:22-10.0.0.1:54288.service: Deactivated successfully. Mar 2 12:54:59.455950 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 12:54:59.464864 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Mar 2 12:54:59.475270 systemd-logind[1549]: Removed session 20. Mar 2 12:55:00.457308 kubelet[2792]: E0302 12:55:00.456579 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:04.312143 systemd[1]: Started sshd@20-10.0.0.17:22-10.0.0.1:46028.service - OpenSSH per-connection server daemon (10.0.0.1:46028). 
Mar 2 12:55:04.443880 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 46028 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:04.455256 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:04.482954 systemd-logind[1549]: New session 21 of user core. Mar 2 12:55:04.502091 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 12:55:05.371834 sshd[4317]: Connection closed by 10.0.0.1 port 46028 Mar 2 12:55:05.374211 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:05.381966 systemd[1]: sshd@20-10.0.0.17:22-10.0.0.1:46028.service: Deactivated successfully. Mar 2 12:55:05.386946 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 12:55:05.390591 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Mar 2 12:55:05.396069 systemd-logind[1549]: Removed session 21. Mar 2 12:55:10.386191 systemd[1]: Started sshd@21-10.0.0.17:22-10.0.0.1:40180.service - OpenSSH per-connection server daemon (10.0.0.1:40180). Mar 2 12:55:10.445111 kubelet[2792]: E0302 12:55:10.445014 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:10.455478 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 40180 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:10.458603 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:10.466381 systemd-logind[1549]: New session 22 of user core. Mar 2 12:55:10.475782 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 2 12:55:10.584223 sshd[4333]: Connection closed by 10.0.0.1 port 40180 Mar 2 12:55:10.584712 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:10.591703 systemd[1]: sshd@21-10.0.0.17:22-10.0.0.1:40180.service: Deactivated successfully. Mar 2 12:55:10.594848 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 12:55:10.596536 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Mar 2 12:55:10.599150 systemd-logind[1549]: Removed session 22. Mar 2 12:55:15.606711 systemd[1]: Started sshd@22-10.0.0.17:22-10.0.0.1:40196.service - OpenSSH per-connection server daemon (10.0.0.1:40196). Mar 2 12:55:15.687109 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 40196 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:15.689512 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:15.698813 systemd-logind[1549]: New session 23 of user core. Mar 2 12:55:15.707710 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 12:55:15.841408 sshd[4349]: Connection closed by 10.0.0.1 port 40196 Mar 2 12:55:15.842016 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:15.847915 systemd[1]: sshd@22-10.0.0.17:22-10.0.0.1:40196.service: Deactivated successfully. Mar 2 12:55:15.853377 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 12:55:15.859309 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Mar 2 12:55:15.861323 systemd-logind[1549]: Removed session 23. Mar 2 12:55:19.452038 kubelet[2792]: E0302 12:55:19.446132 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:20.877974 systemd[1]: Started sshd@23-10.0.0.17:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174). 
Mar 2 12:55:21.043824 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:21.046561 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:21.072936 systemd-logind[1549]: New session 24 of user core. Mar 2 12:55:21.082983 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 12:55:21.291026 sshd[4367]: Connection closed by 10.0.0.1 port 59174 Mar 2 12:55:21.291703 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:21.305728 systemd[1]: sshd@23-10.0.0.17:22-10.0.0.1:59174.service: Deactivated successfully. Mar 2 12:55:21.308906 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 12:55:21.313871 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Mar 2 12:55:21.321010 systemd-logind[1549]: Removed session 24. Mar 2 12:55:24.469010 kubelet[2792]: E0302 12:55:24.467271 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:26.336031 systemd[1]: Started sshd@24-10.0.0.17:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180). Mar 2 12:55:26.501023 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:26.499191 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:26.533232 systemd-logind[1549]: New session 25 of user core. Mar 2 12:55:26.550362 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 2 12:55:27.004266 sshd[4384]: Connection closed by 10.0.0.1 port 59180 Mar 2 12:55:27.011024 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:27.036755 systemd[1]: sshd@24-10.0.0.17:22-10.0.0.1:59180.service: Deactivated successfully. Mar 2 12:55:27.055916 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 12:55:27.068574 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Mar 2 12:55:27.072156 systemd-logind[1549]: Removed session 25. Mar 2 12:55:29.453275 kubelet[2792]: E0302 12:55:29.450333 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:32.101220 systemd[1]: Started sshd@25-10.0.0.17:22-10.0.0.1:43134.service - OpenSSH per-connection server daemon (10.0.0.1:43134). Mar 2 12:55:32.273476 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 43134 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:32.275332 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:32.305299 systemd-logind[1549]: New session 26 of user core. Mar 2 12:55:32.327035 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 2 12:55:32.922403 sshd[4400]: Connection closed by 10.0.0.1 port 43134 Mar 2 12:55:32.923937 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:32.946188 systemd[1]: sshd@25-10.0.0.17:22-10.0.0.1:43134.service: Deactivated successfully. Mar 2 12:55:32.957565 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. Mar 2 12:55:32.963090 systemd[1]: session-26.scope: Deactivated successfully. Mar 2 12:55:32.972053 systemd-logind[1549]: Removed session 26. Mar 2 12:55:37.973823 systemd[1]: Started sshd@26-10.0.0.17:22-10.0.0.1:43148.service - OpenSSH per-connection server daemon (10.0.0.1:43148). 
Mar 2 12:55:38.179230 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 43148 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:38.188548 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:38.219489 systemd-logind[1549]: New session 27 of user core. Mar 2 12:55:38.233215 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 2 12:55:38.452494 kubelet[2792]: E0302 12:55:38.452243 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:55:38.610605 sshd[4416]: Connection closed by 10.0.0.1 port 43148 Mar 2 12:55:38.605728 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:38.620972 systemd[1]: sshd@26-10.0.0.17:22-10.0.0.1:43148.service: Deactivated successfully. Mar 2 12:55:38.625020 systemd[1]: session-27.scope: Deactivated successfully. Mar 2 12:55:38.633119 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit. Mar 2 12:55:38.641231 systemd-logind[1549]: Removed session 27. Mar 2 12:55:43.669577 systemd[1]: Started sshd@27-10.0.0.17:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884). Mar 2 12:55:43.802577 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:43.803619 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:43.824279 systemd-logind[1549]: New session 28 of user core. Mar 2 12:55:43.844920 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 2 12:55:44.089555 sshd[4434]: Connection closed by 10.0.0.1 port 52884 Mar 2 12:55:44.091101 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:44.100219 systemd[1]: sshd@27-10.0.0.17:22-10.0.0.1:52884.service: Deactivated successfully. Mar 2 12:55:44.106792 systemd[1]: session-28.scope: Deactivated successfully. Mar 2 12:55:44.112485 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit. Mar 2 12:55:44.117278 systemd-logind[1549]: Removed session 28. Mar 2 12:55:49.181184 systemd[1]: Started sshd@28-10.0.0.17:22-10.0.0.1:52896.service - OpenSSH per-connection server daemon (10.0.0.1:52896). Mar 2 12:55:49.424111 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 52896 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:49.427487 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:49.454396 systemd-logind[1549]: New session 29 of user core. Mar 2 12:55:49.470007 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 2 12:55:50.184011 sshd[4453]: Connection closed by 10.0.0.1 port 52896 Mar 2 12:55:50.185262 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:50.206636 systemd[1]: sshd@28-10.0.0.17:22-10.0.0.1:52896.service: Deactivated successfully. Mar 2 12:55:50.239361 systemd[1]: session-29.scope: Deactivated successfully. Mar 2 12:55:50.245826 systemd-logind[1549]: Session 29 logged out. Waiting for processes to exit. Mar 2 12:55:50.258392 systemd-logind[1549]: Removed session 29. Mar 2 12:55:55.228089 systemd[1]: Started sshd@29-10.0.0.17:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294). 
Mar 2 12:55:55.405383 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:55:55.412603 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:55.430190 systemd-logind[1549]: New session 30 of user core. Mar 2 12:55:55.446346 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 2 12:55:55.956500 sshd[4470]: Connection closed by 10.0.0.1 port 49294 Mar 2 12:55:55.957537 sshd-session[4467]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:55.971964 systemd[1]: sshd@29-10.0.0.17:22-10.0.0.1:49294.service: Deactivated successfully. Mar 2 12:55:55.977336 systemd[1]: session-30.scope: Deactivated successfully. Mar 2 12:55:55.997098 systemd-logind[1549]: Session 30 logged out. Waiting for processes to exit. Mar 2 12:55:56.000823 systemd-logind[1549]: Removed session 30. Mar 2 12:55:58.460852 kubelet[2792]: E0302 12:55:58.460367 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:01.017965 systemd[1]: Started sshd@30-10.0.0.17:22-10.0.0.1:41964.service - OpenSSH per-connection server daemon (10.0.0.1:41964). Mar 2 12:56:01.303253 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 41964 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:01.311564 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:01.385928 systemd-logind[1549]: New session 31 of user core. Mar 2 12:56:01.433394 systemd[1]: Started session-31.scope - Session 31 of User core. 
Mar 2 12:56:01.849278 sshd[4486]: Connection closed by 10.0.0.1 port 41964 Mar 2 12:56:01.851785 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:01.886303 systemd[1]: sshd@30-10.0.0.17:22-10.0.0.1:41964.service: Deactivated successfully. Mar 2 12:56:01.895142 systemd[1]: session-31.scope: Deactivated successfully. Mar 2 12:56:01.901378 systemd-logind[1549]: Session 31 logged out. Waiting for processes to exit. Mar 2 12:56:01.910742 systemd-logind[1549]: Removed session 31. Mar 2 12:56:06.882325 systemd[1]: Started sshd@31-10.0.0.17:22-10.0.0.1:41974.service - OpenSSH per-connection server daemon (10.0.0.1:41974). Mar 2 12:56:07.045014 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:07.049267 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:07.097504 systemd-logind[1549]: New session 32 of user core. Mar 2 12:56:07.129896 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 2 12:56:07.443661 sshd[4503]: Connection closed by 10.0.0.1 port 41974 Mar 2 12:56:07.451101 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:07.486285 systemd[1]: sshd@31-10.0.0.17:22-10.0.0.1:41974.service: Deactivated successfully. Mar 2 12:56:07.509641 systemd[1]: session-32.scope: Deactivated successfully. Mar 2 12:56:07.532800 systemd-logind[1549]: Session 32 logged out. Waiting for processes to exit. Mar 2 12:56:07.547909 systemd-logind[1549]: Removed session 32. Mar 2 12:56:10.453330 kubelet[2792]: E0302 12:56:10.452909 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:12.567824 systemd[1]: Started sshd@32-10.0.0.17:22-10.0.0.1:38332.service - OpenSSH per-connection server daemon (10.0.0.1:38332). 
Mar 2 12:56:12.896759 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:12.908922 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:12.963090 systemd-logind[1549]: New session 33 of user core. Mar 2 12:56:12.978364 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 2 12:56:13.408352 sshd[4519]: Connection closed by 10.0.0.1 port 38332 Mar 2 12:56:13.417598 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:13.447910 systemd[1]: sshd@32-10.0.0.17:22-10.0.0.1:38332.service: Deactivated successfully. Mar 2 12:56:13.460731 systemd[1]: session-33.scope: Deactivated successfully. Mar 2 12:56:13.464355 systemd-logind[1549]: Session 33 logged out. Waiting for processes to exit. Mar 2 12:56:13.508942 systemd-logind[1549]: Removed session 33. Mar 2 12:56:14.469816 kubelet[2792]: E0302 12:56:14.461546 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:18.483766 systemd[1]: Started sshd@33-10.0.0.17:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344). Mar 2 12:56:18.707501 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:18.710915 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:18.755803 systemd-logind[1549]: New session 34 of user core. Mar 2 12:56:18.777257 systemd[1]: Started session-34.scope - Session 34 of User core. 
Mar 2 12:56:19.341808 sshd[4538]: Connection closed by 10.0.0.1 port 38344 Mar 2 12:56:19.340785 sshd-session[4533]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:19.371381 systemd[1]: sshd@33-10.0.0.17:22-10.0.0.1:38344.service: Deactivated successfully. Mar 2 12:56:19.383766 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 12:56:19.407190 systemd-logind[1549]: Session 34 logged out. Waiting for processes to exit. Mar 2 12:56:19.434156 systemd-logind[1549]: Removed session 34. Mar 2 12:56:21.464183 kubelet[2792]: E0302 12:56:21.450741 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:23.454275 kubelet[2792]: E0302 12:56:23.445773 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:24.433348 systemd[1]: Started sshd@34-10.0.0.17:22-10.0.0.1:41922.service - OpenSSH per-connection server daemon (10.0.0.1:41922). Mar 2 12:56:24.654380 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 41922 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:24.662917 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:24.692657 systemd-logind[1549]: New session 35 of user core. Mar 2 12:56:24.749240 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 2 12:56:25.350005 sshd[4554]: Connection closed by 10.0.0.1 port 41922 Mar 2 12:56:25.364806 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:25.388990 systemd-logind[1549]: Session 35 logged out. Waiting for processes to exit. Mar 2 12:56:25.390729 systemd[1]: sshd@34-10.0.0.17:22-10.0.0.1:41922.service: Deactivated successfully. 
Mar 2 12:56:25.397560 systemd[1]: session-35.scope: Deactivated successfully. Mar 2 12:56:25.425065 systemd-logind[1549]: Removed session 35. Mar 2 12:56:30.434144 systemd[1]: Started sshd@35-10.0.0.17:22-10.0.0.1:60520.service - OpenSSH per-connection server daemon (10.0.0.1:60520). Mar 2 12:56:31.018326 sshd[4567]: Accepted publickey for core from 10.0.0.1 port 60520 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:31.022725 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:31.065274 systemd-logind[1549]: New session 36 of user core. Mar 2 12:56:31.120086 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 2 12:56:31.702294 sshd[4570]: Connection closed by 10.0.0.1 port 60520 Mar 2 12:56:31.702700 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:31.742932 systemd[1]: sshd@35-10.0.0.17:22-10.0.0.1:60520.service: Deactivated successfully. Mar 2 12:56:31.773488 systemd[1]: session-36.scope: Deactivated successfully. Mar 2 12:56:31.790054 systemd-logind[1549]: Session 36 logged out. Waiting for processes to exit. Mar 2 12:56:31.811237 systemd[1]: Started sshd@36-10.0.0.17:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532). Mar 2 12:56:31.813971 systemd-logind[1549]: Removed session 36. Mar 2 12:56:32.160597 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:32.159972 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:32.261961 systemd-logind[1549]: New session 37 of user core. Mar 2 12:56:32.314821 systemd[1]: Started session-37.scope - Session 37 of User core. 
Mar 2 12:56:32.448245 kubelet[2792]: E0302 12:56:32.447571 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:34.111052 sshd[4586]: Connection closed by 10.0.0.1 port 60532 Mar 2 12:56:34.110800 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:34.197409 systemd[1]: sshd@36-10.0.0.17:22-10.0.0.1:60532.service: Deactivated successfully. Mar 2 12:56:34.218763 systemd[1]: session-37.scope: Deactivated successfully. Mar 2 12:56:34.227874 systemd-logind[1549]: Session 37 logged out. Waiting for processes to exit. Mar 2 12:56:34.244301 systemd[1]: Started sshd@37-10.0.0.17:22-10.0.0.1:60546.service - OpenSSH per-connection server daemon (10.0.0.1:60546). Mar 2 12:56:34.264046 systemd-logind[1549]: Removed session 37. Mar 2 12:56:34.471129 kubelet[2792]: E0302 12:56:34.451280 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:34.601249 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 60546 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:34.604690 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:34.637294 systemd-logind[1549]: New session 38 of user core. Mar 2 12:56:34.662751 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 2 12:56:38.800833 sshd[4601]: Connection closed by 10.0.0.1 port 60546 Mar 2 12:56:38.822575 sshd-session[4598]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:38.863733 systemd[1]: sshd@37-10.0.0.17:22-10.0.0.1:60546.service: Deactivated successfully. Mar 2 12:56:38.876032 systemd[1]: session-38.scope: Deactivated successfully. 
Mar 2 12:56:38.876523 systemd[1]: session-38.scope: Consumed 1.038s CPU time, 45.1M memory peak. Mar 2 12:56:38.885101 systemd-logind[1549]: Session 38 logged out. Waiting for processes to exit. Mar 2 12:56:38.889723 systemd[1]: Started sshd@38-10.0.0.17:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554). Mar 2 12:56:38.991946 systemd-logind[1549]: Removed session 38. Mar 2 12:56:39.330388 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:39.338554 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:39.394926 systemd-logind[1549]: New session 39 of user core. Mar 2 12:56:39.410905 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 2 12:56:41.901066 sshd[4625]: Connection closed by 10.0.0.1 port 60554 Mar 2 12:56:41.914943 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:41.992493 systemd[1]: sshd@38-10.0.0.17:22-10.0.0.1:60554.service: Deactivated successfully. Mar 2 12:56:42.016188 systemd[1]: session-39.scope: Deactivated successfully. Mar 2 12:56:42.033754 systemd-logind[1549]: Session 39 logged out. Waiting for processes to exit. Mar 2 12:56:42.069631 systemd[1]: Started sshd@39-10.0.0.17:22-10.0.0.1:44286.service - OpenSSH per-connection server daemon (10.0.0.1:44286). Mar 2 12:56:42.090530 systemd-logind[1549]: Removed session 39. Mar 2 12:56:42.482350 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:42.490790 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:42.540055 systemd-logind[1549]: New session 40 of user core. Mar 2 12:56:42.557773 systemd[1]: Started session-40.scope - Session 40 of User core. 
Mar 2 12:56:42.901778 sshd[4646]: Connection closed by 10.0.0.1 port 44286 Mar 2 12:56:42.901391 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:42.924510 systemd[1]: sshd@39-10.0.0.17:22-10.0.0.1:44286.service: Deactivated successfully. Mar 2 12:56:42.930928 systemd[1]: session-40.scope: Deactivated successfully. Mar 2 12:56:42.956094 systemd-logind[1549]: Session 40 logged out. Waiting for processes to exit. Mar 2 12:56:42.964774 systemd-logind[1549]: Removed session 40. Mar 2 12:56:44.500113 kubelet[2792]: E0302 12:56:44.494498 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.101024 systemd[1]: Started sshd@40-10.0.0.17:22-10.0.0.1:44292.service - OpenSSH per-connection server daemon (10.0.0.1:44292). Mar 2 12:56:48.452080 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 44292 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:48.472276 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:48.507280 systemd-logind[1549]: New session 41 of user core. Mar 2 12:56:48.522841 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 2 12:56:49.186081 sshd[4666]: Connection closed by 10.0.0.1 port 44292 Mar 2 12:56:49.189083 sshd-session[4661]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:49.219379 systemd[1]: sshd@40-10.0.0.17:22-10.0.0.1:44292.service: Deactivated successfully. Mar 2 12:56:49.232360 systemd[1]: session-41.scope: Deactivated successfully. Mar 2 12:56:49.275961 systemd-logind[1549]: Session 41 logged out. Waiting for processes to exit. Mar 2 12:56:49.283007 systemd-logind[1549]: Removed session 41. Mar 2 12:56:54.279789 systemd[1]: Started sshd@41-10.0.0.17:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366). 
Mar 2 12:56:54.670832 sshd[4680]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:56:54.691022 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:56:54.762746 systemd-logind[1549]: New session 42 of user core. Mar 2 12:56:54.790360 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 2 12:56:55.400318 sshd[4683]: Connection closed by 10.0.0.1 port 54366 Mar 2 12:56:55.401375 sshd-session[4680]: pam_unix(sshd:session): session closed for user core Mar 2 12:56:55.424072 systemd-logind[1549]: Session 42 logged out. Waiting for processes to exit. Mar 2 12:56:55.427692 systemd[1]: sshd@41-10.0.0.17:22-10.0.0.1:54366.service: Deactivated successfully. Mar 2 12:56:55.437007 systemd[1]: session-42.scope: Deactivated successfully. Mar 2 12:56:55.460183 systemd-logind[1549]: Removed session 42. Mar 2 12:57:00.476210 systemd[1]: Started sshd@42-10.0.0.17:22-10.0.0.1:57604.service - OpenSSH per-connection server daemon (10.0.0.1:57604). Mar 2 12:57:00.684960 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 57604 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:00.693347 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:00.740175 systemd-logind[1549]: New session 43 of user core. Mar 2 12:57:00.759753 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 2 12:57:01.505982 sshd[4700]: Connection closed by 10.0.0.1 port 57604 Mar 2 12:57:01.507315 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:01.542148 systemd[1]: sshd@42-10.0.0.17:22-10.0.0.1:57604.service: Deactivated successfully. Mar 2 12:57:01.562296 systemd[1]: session-43.scope: Deactivated successfully. Mar 2 12:57:01.578224 systemd-logind[1549]: Session 43 logged out. Waiting for processes to exit. 
Mar 2 12:57:01.586382 systemd-logind[1549]: Removed session 43. Mar 2 12:57:06.573119 systemd[1]: Started sshd@43-10.0.0.17:22-10.0.0.1:57608.service - OpenSSH per-connection server daemon (10.0.0.1:57608). Mar 2 12:57:06.834552 sshd[4713]: Accepted publickey for core from 10.0.0.1 port 57608 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:06.842245 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:06.891881 systemd-logind[1549]: New session 44 of user core. Mar 2 12:57:06.905097 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 2 12:57:07.540052 sshd[4716]: Connection closed by 10.0.0.1 port 57608 Mar 2 12:57:07.539045 sshd-session[4713]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:07.572837 systemd[1]: sshd@43-10.0.0.17:22-10.0.0.1:57608.service: Deactivated successfully. Mar 2 12:57:07.579098 systemd[1]: session-44.scope: Deactivated successfully. Mar 2 12:57:07.596725 systemd-logind[1549]: Session 44 logged out. Waiting for processes to exit. Mar 2 12:57:07.610761 systemd-logind[1549]: Removed session 44. Mar 2 12:57:12.602330 systemd[1]: Started sshd@44-10.0.0.17:22-10.0.0.1:52014.service - OpenSSH per-connection server daemon (10.0.0.1:52014). Mar 2 12:57:12.877290 sshd[4729]: Accepted publickey for core from 10.0.0.1 port 52014 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:12.881746 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:12.921848 systemd-logind[1549]: New session 45 of user core. Mar 2 12:57:12.947487 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 2 12:57:13.399517 sshd[4732]: Connection closed by 10.0.0.1 port 52014 Mar 2 12:57:13.397190 sshd-session[4729]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:13.420710 systemd[1]: sshd@44-10.0.0.17:22-10.0.0.1:52014.service: Deactivated successfully. 
Mar 2 12:57:13.433166 systemd[1]: session-45.scope: Deactivated successfully. Mar 2 12:57:13.449902 systemd-logind[1549]: Session 45 logged out. Waiting for processes to exit. Mar 2 12:57:13.463878 systemd-logind[1549]: Removed session 45. Mar 2 12:57:18.452868 kubelet[2792]: E0302 12:57:18.451595 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:18.468009 systemd[1]: Started sshd@45-10.0.0.17:22-10.0.0.1:52022.service - OpenSSH per-connection server daemon (10.0.0.1:52022). Mar 2 12:57:18.755409 sshd[4746]: Accepted publickey for core from 10.0.0.1 port 52022 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:18.766272 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:18.789124 systemd-logind[1549]: New session 46 of user core. Mar 2 12:57:18.804120 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 2 12:57:19.156619 sshd[4751]: Connection closed by 10.0.0.1 port 52022 Mar 2 12:57:19.159255 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:19.170807 systemd[1]: sshd@45-10.0.0.17:22-10.0.0.1:52022.service: Deactivated successfully. Mar 2 12:57:19.177216 systemd[1]: session-46.scope: Deactivated successfully. Mar 2 12:57:19.189561 systemd-logind[1549]: Session 46 logged out. Waiting for processes to exit. Mar 2 12:57:19.196404 systemd-logind[1549]: Removed session 46. Mar 2 12:57:20.483862 kubelet[2792]: E0302 12:57:20.483369 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:24.254843 systemd[1]: Started sshd@46-10.0.0.17:22-10.0.0.1:48262.service - OpenSSH per-connection server daemon (10.0.0.1:48262). 
Mar 2 12:57:24.618294 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 48262 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:24.628183 sshd-session[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:24.681325 systemd-logind[1549]: New session 47 of user core. Mar 2 12:57:24.697742 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 2 12:57:25.464585 sshd[4769]: Connection closed by 10.0.0.1 port 48262 Mar 2 12:57:25.470385 sshd-session[4766]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:25.510620 systemd[1]: sshd@46-10.0.0.17:22-10.0.0.1:48262.service: Deactivated successfully. Mar 2 12:57:25.519022 systemd[1]: session-47.scope: Deactivated successfully. Mar 2 12:57:25.532562 systemd-logind[1549]: Session 47 logged out. Waiting for processes to exit. Mar 2 12:57:25.544056 systemd-logind[1549]: Removed session 47. Mar 2 12:57:30.755582 systemd[1]: Started sshd@47-10.0.0.17:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202). Mar 2 12:57:31.094565 sshd[4783]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:31.107865 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:31.181035 systemd-logind[1549]: New session 48 of user core. Mar 2 12:57:31.219529 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 2 12:57:32.336140 sshd[4786]: Connection closed by 10.0.0.1 port 41202 Mar 2 12:57:32.344088 sshd-session[4783]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:32.365733 systemd[1]: sshd@47-10.0.0.17:22-10.0.0.1:41202.service: Deactivated successfully. Mar 2 12:57:32.379279 systemd[1]: session-48.scope: Deactivated successfully. Mar 2 12:57:32.390334 systemd-logind[1549]: Session 48 logged out. Waiting for processes to exit. 
Mar 2 12:57:32.405890 systemd-logind[1549]: Removed session 48. Mar 2 12:57:34.281181 containerd[1568]: time="2026-03-02T12:57:34.277603065Z" level=warning msg="container event discarded" container=32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387 type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.281181 containerd[1568]: time="2026-03-02T12:57:34.278785479Z" level=warning msg="container event discarded" container=32534b5acdb498447e69e9bdcaa305fca8299a3d803a2ade84d8d40a3adf6387 type=CONTAINER_STARTED_EVENT Mar 2 12:57:34.302942 containerd[1568]: time="2026-03-02T12:57:34.302837276Z" level=warning msg="container event discarded" container=49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9 type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.303202 containerd[1568]: time="2026-03-02T12:57:34.303070521Z" level=warning msg="container event discarded" container=49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9 type=CONTAINER_STARTED_EVENT Mar 2 12:57:34.303202 containerd[1568]: time="2026-03-02T12:57:34.303087333Z" level=warning msg="container event discarded" container=fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.303707 containerd[1568]: time="2026-03-02T12:57:34.303098804Z" level=warning msg="container event discarded" container=fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c type=CONTAINER_STARTED_EVENT Mar 2 12:57:34.303707 containerd[1568]: time="2026-03-02T12:57:34.303296883Z" level=warning msg="container event discarded" container=8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6 type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.325500 containerd[1568]: time="2026-03-02T12:57:34.319160272Z" level=warning msg="container event discarded" container=1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5 type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.325500 containerd[1568]: time="2026-03-02T12:57:34.319202520Z" level=warning msg="container 
event discarded" container=fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac type=CONTAINER_CREATED_EVENT Mar 2 12:57:34.431538 containerd[1568]: time="2026-03-02T12:57:34.430490937Z" level=warning msg="container event discarded" container=8da916fc7e2155746521816603b9668b065f19a1bec32a18620f37ae8b193ea6 type=CONTAINER_STARTED_EVENT Mar 2 12:57:34.458003 containerd[1568]: time="2026-03-02T12:57:34.456659205Z" level=warning msg="container event discarded" container=1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5 type=CONTAINER_STARTED_EVENT Mar 2 12:57:34.495237 containerd[1568]: time="2026-03-02T12:57:34.495103583Z" level=warning msg="container event discarded" container=fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac type=CONTAINER_STARTED_EVENT Mar 2 12:57:37.419412 systemd[1]: Started sshd@48-10.0.0.17:22-10.0.0.1:41204.service - OpenSSH per-connection server daemon (10.0.0.1:41204). Mar 2 12:57:37.618845 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 41204 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:37.627535 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:37.678271 systemd-logind[1549]: New session 49 of user core. Mar 2 12:57:37.710031 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 2 12:57:38.163301 sshd[4803]: Connection closed by 10.0.0.1 port 41204 Mar 2 12:57:38.158588 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:38.171623 systemd[1]: sshd@48-10.0.0.17:22-10.0.0.1:41204.service: Deactivated successfully. Mar 2 12:57:38.184181 systemd[1]: session-49.scope: Deactivated successfully. Mar 2 12:57:38.188234 systemd-logind[1549]: Session 49 logged out. Waiting for processes to exit. Mar 2 12:57:38.196305 systemd-logind[1549]: Removed session 49. 
Mar 2 12:57:38.469247 kubelet[2792]: E0302 12:57:38.451263 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:43.203200 systemd[1]: Started sshd@49-10.0.0.17:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066). Mar 2 12:57:43.513571 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:43.516508 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:43.596069 systemd-logind[1549]: New session 50 of user core. Mar 2 12:57:43.609851 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 2 12:57:44.089660 sshd[4822]: Connection closed by 10.0.0.1 port 39066 Mar 2 12:57:44.093077 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:44.119758 systemd[1]: sshd@49-10.0.0.17:22-10.0.0.1:39066.service: Deactivated successfully. Mar 2 12:57:44.133285 systemd[1]: session-50.scope: Deactivated successfully. Mar 2 12:57:44.139840 systemd-logind[1549]: Session 50 logged out. Waiting for processes to exit. Mar 2 12:57:44.157376 systemd-logind[1549]: Removed session 50. 
Mar 2 12:57:47.941577 containerd[1568]: time="2026-03-02T12:57:47.934494564Z" level=warning msg="container event discarded" container=ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69 type=CONTAINER_CREATED_EVENT Mar 2 12:57:47.941577 containerd[1568]: time="2026-03-02T12:57:47.934614097Z" level=warning msg="container event discarded" container=ddf8439ff01531b5ed9ec402d3977db472c1179af14508566968f6a81abf7a69 type=CONTAINER_STARTED_EVENT Mar 2 12:57:47.984602 containerd[1568]: time="2026-03-02T12:57:47.983581550Z" level=warning msg="container event discarded" container=21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd type=CONTAINER_CREATED_EVENT Mar 2 12:57:47.984602 containerd[1568]: time="2026-03-02T12:57:47.983730688Z" level=warning msg="container event discarded" container=21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd type=CONTAINER_STARTED_EVENT Mar 2 12:57:48.011630 containerd[1568]: time="2026-03-02T12:57:48.010036290Z" level=warning msg="container event discarded" container=1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4 type=CONTAINER_CREATED_EVENT Mar 2 12:57:48.146566 containerd[1568]: time="2026-03-02T12:57:48.145886865Z" level=warning msg="container event discarded" container=9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28 type=CONTAINER_CREATED_EVENT Mar 2 12:57:48.146566 containerd[1568]: time="2026-03-02T12:57:48.145968958Z" level=warning msg="container event discarded" container=9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28 type=CONTAINER_STARTED_EVENT Mar 2 12:57:48.146566 containerd[1568]: time="2026-03-02T12:57:48.145985839Z" level=warning msg="container event discarded" container=1cf60f17ea449e39b86b2d0abcdcf20d27f1f38ab3da6245fda20c575b1435c4 type=CONTAINER_STARTED_EVENT Mar 2 12:57:48.465327 kubelet[2792]: E0302 12:57:48.464402 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:49.210567 systemd[1]: Started sshd@50-10.0.0.17:22-10.0.0.1:39082.service - OpenSSH per-connection server daemon (10.0.0.1:39082). Mar 2 12:57:49.606183 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 39082 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:49.616080 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:49.647238 systemd-logind[1549]: New session 51 of user core. Mar 2 12:57:49.687564 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 2 12:57:50.497625 sshd[4840]: Connection closed by 10.0.0.1 port 39082 Mar 2 12:57:50.557410 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:50.690368 systemd[1]: sshd@50-10.0.0.17:22-10.0.0.1:39082.service: Deactivated successfully. Mar 2 12:57:50.863895 systemd[1]: session-51.scope: Deactivated successfully. Mar 2 12:57:50.890058 systemd-logind[1549]: Session 51 logged out. Waiting for processes to exit. Mar 2 12:57:50.905138 systemd-logind[1549]: Removed session 51. 
Mar 2 12:57:51.467532 kubelet[2792]: E0302 12:57:51.467037 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:51.483575 kubelet[2792]: E0302 12:57:51.470955 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:51.483575 kubelet[2792]: E0302 12:57:51.471607 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:52.491396 kubelet[2792]: E0302 12:57:52.475555 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:54.886399 containerd[1568]: time="2026-03-02T12:57:54.886007822Z" level=warning msg="container event discarded" container=f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe type=CONTAINER_CREATED_EVENT Mar 2 12:57:55.029217 containerd[1568]: time="2026-03-02T12:57:55.028108990Z" level=warning msg="container event discarded" container=f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe type=CONTAINER_STARTED_EVENT Mar 2 12:57:55.567342 systemd[1]: Started sshd@51-10.0.0.17:22-10.0.0.1:59902.service - OpenSSH per-connection server daemon (10.0.0.1:59902). Mar 2 12:57:56.074211 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 59902 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:57:56.071192 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:57:56.128320 systemd-logind[1549]: New session 52 of user core. Mar 2 12:57:56.178395 systemd[1]: Started session-52.scope - Session 52 of User core. 
Mar 2 12:57:56.558343 sshd[4856]: Connection closed by 10.0.0.1 port 59902 Mar 2 12:57:56.559269 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Mar 2 12:57:56.573557 systemd[1]: sshd@51-10.0.0.17:22-10.0.0.1:59902.service: Deactivated successfully. Mar 2 12:57:56.578946 systemd[1]: session-52.scope: Deactivated successfully. Mar 2 12:57:56.589229 systemd-logind[1549]: Session 52 logged out. Waiting for processes to exit. Mar 2 12:57:56.595099 systemd-logind[1549]: Removed session 52. Mar 2 12:58:01.646269 systemd[1]: Started sshd@52-10.0.0.17:22-10.0.0.1:46782.service - OpenSSH per-connection server daemon (10.0.0.1:46782). Mar 2 12:58:01.916351 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 46782 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:58:01.925531 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:01.945782 systemd-logind[1549]: New session 53 of user core. Mar 2 12:58:01.976170 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 2 12:58:02.491176 sshd[4872]: Connection closed by 10.0.0.1 port 46782 Mar 2 12:58:02.490034 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:02.511233 systemd[1]: sshd@52-10.0.0.17:22-10.0.0.1:46782.service: Deactivated successfully. Mar 2 12:58:02.532173 systemd[1]: session-53.scope: Deactivated successfully. Mar 2 12:58:02.620206 systemd-logind[1549]: Session 53 logged out. Waiting for processes to exit. Mar 2 12:58:02.629564 systemd-logind[1549]: Removed session 53. Mar 2 12:58:07.549536 systemd[1]: Started sshd@53-10.0.0.17:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794). 
Mar 2 12:58:07.835048 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:58:07.845081 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:07.897563 systemd-logind[1549]: New session 54 of user core. Mar 2 12:58:07.913493 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 2 12:58:08.744964 sshd[4888]: Connection closed by 10.0.0.1 port 46794 Mar 2 12:58:08.772188 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:08.818180 systemd-logind[1549]: Session 54 logged out. Waiting for processes to exit. Mar 2 12:58:08.824343 systemd[1]: sshd@53-10.0.0.17:22-10.0.0.1:46794.service: Deactivated successfully. Mar 2 12:58:08.848706 systemd[1]: session-54.scope: Deactivated successfully. Mar 2 12:58:08.879607 systemd-logind[1549]: Removed session 54. Mar 2 12:58:13.865136 systemd[1]: Started sshd@54-10.0.0.17:22-10.0.0.1:56644.service - OpenSSH per-connection server daemon (10.0.0.1:56644). Mar 2 12:58:14.236243 sshd[4902]: Accepted publickey for core from 10.0.0.1 port 56644 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:58:14.238849 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:14.263788 systemd-logind[1549]: New session 55 of user core. Mar 2 12:58:14.287180 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 2 12:58:14.746836 sshd[4905]: Connection closed by 10.0.0.1 port 56644 Mar 2 12:58:14.774217 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:14.809799 systemd[1]: sshd@54-10.0.0.17:22-10.0.0.1:56644.service: Deactivated successfully. Mar 2 12:58:14.812872 systemd-logind[1549]: Session 55 logged out. Waiting for processes to exit. Mar 2 12:58:14.815117 systemd[1]: session-55.scope: Deactivated successfully. 
Mar 2 12:58:14.866389 systemd-logind[1549]: Removed session 55. Mar 2 12:58:15.356087 containerd[1568]: time="2026-03-02T12:58:15.352994929Z" level=warning msg="container event discarded" container=12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc type=CONTAINER_CREATED_EVENT Mar 2 12:58:15.510184 containerd[1568]: time="2026-03-02T12:58:15.509339143Z" level=warning msg="container event discarded" container=12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc type=CONTAINER_STARTED_EVENT Mar 2 12:58:15.855866 containerd[1568]: time="2026-03-02T12:58:15.855414309Z" level=warning msg="container event discarded" container=12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc type=CONTAINER_STOPPED_EVENT Mar 2 12:58:16.883653 containerd[1568]: time="2026-03-02T12:58:16.865090630Z" level=warning msg="container event discarded" container=84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b type=CONTAINER_CREATED_EVENT Mar 2 12:58:16.962358 containerd[1568]: time="2026-03-02T12:58:16.959357935Z" level=warning msg="container event discarded" container=84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b type=CONTAINER_STARTED_EVENT Mar 2 12:58:17.071273 containerd[1568]: time="2026-03-02T12:58:17.071059855Z" level=warning msg="container event discarded" container=84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b type=CONTAINER_STOPPED_EVENT Mar 2 12:58:17.882269 containerd[1568]: time="2026-03-02T12:58:17.882208726Z" level=warning msg="container event discarded" container=d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512 type=CONTAINER_CREATED_EVENT Mar 2 12:58:18.842589 containerd[1568]: time="2026-03-02T12:58:18.730112113Z" level=warning msg="container event discarded" container=d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512 type=CONTAINER_STARTED_EVENT Mar 2 12:58:19.695806 containerd[1568]: time="2026-03-02T12:58:19.693565632Z" level=warning msg="container 
event discarded" container=d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512 type=CONTAINER_STOPPED_EVENT Mar 2 12:58:19.695806 containerd[1568]: time="2026-03-02T12:58:19.695171748Z" level=warning msg="container event discarded" container=bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c type=CONTAINER_CREATED_EVENT Mar 2 12:58:19.695806 containerd[1568]: time="2026-03-02T12:58:19.695193689Z" level=warning msg="container event discarded" container=bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c type=CONTAINER_STARTED_EVENT Mar 2 12:58:19.695806 containerd[1568]: time="2026-03-02T12:58:19.695204038Z" level=warning msg="container event discarded" container=bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c type=CONTAINER_STOPPED_EVENT Mar 2 12:58:20.024969 containerd[1568]: time="2026-03-02T12:58:19.893953845Z" level=warning msg="container event discarded" container=45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490 type=CONTAINER_CREATED_EVENT Mar 2 12:58:21.142263 systemd[1]: Started sshd@55-10.0.0.17:22-10.0.0.1:56648.service - OpenSSH per-connection server daemon (10.0.0.1:56648). Mar 2 12:58:21.493889 containerd[1568]: time="2026-03-02T12:58:21.489562215Z" level=warning msg="container event discarded" container=45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490 type=CONTAINER_STARTED_EVENT Mar 2 12:58:21.769757 kubelet[2792]: E0302 12:58:21.748319 2792 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.237s" Mar 2 12:58:22.177521 sshd[4921]: Accepted publickey for core from 10.0.0.1 port 56648 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:58:22.196168 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:22.252272 systemd-logind[1549]: New session 56 of user core. 
Mar 2 12:58:22.280265 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 2 12:58:22.843227 sshd[4924]: Connection closed by 10.0.0.1 port 56648 Mar 2 12:58:22.843748 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Mar 2 12:58:22.890884 systemd[1]: sshd@55-10.0.0.17:22-10.0.0.1:56648.service: Deactivated successfully. Mar 2 12:58:22.912556 systemd[1]: session-56.scope: Deactivated successfully. Mar 2 12:58:22.917990 systemd-logind[1549]: Session 56 logged out. Waiting for processes to exit. Mar 2 12:58:22.924261 systemd-logind[1549]: Removed session 56. Mar 2 12:58:27.901379 systemd[1]: Started sshd@56-10.0.0.17:22-10.0.0.1:34360.service - OpenSSH per-connection server daemon (10.0.0.1:34360). Mar 2 12:58:28.080965 sshd[4938]: Accepted publickey for core from 10.0.0.1 port 34360 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 12:58:28.102623 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:58:28.193565 systemd-logind[1549]: New session 57 of user core. 
Mar 2 12:58:28.219280 containerd[1568]: time="2026-03-02T12:58:28.219082883Z" level=warning msg="container event discarded" container=9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c type=CONTAINER_CREATED_EVENT
Mar 2 12:58:28.219280 containerd[1568]: time="2026-03-02T12:58:28.219202737Z" level=warning msg="container event discarded" container=9e54592e1349c92d87a086ef4a7f53641793e48a59390cbca49a2249cd332a7c type=CONTAINER_STARTED_EVENT
Mar 2 12:58:28.219280 containerd[1568]: time="2026-03-02T12:58:28.219217855Z" level=warning msg="container event discarded" container=0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8 type=CONTAINER_CREATED_EVENT
Mar 2 12:58:28.219280 containerd[1568]: time="2026-03-02T12:58:28.219229337Z" level=warning msg="container event discarded" container=0eae3cb0cb617968460bab3b737bb0553fb666fd0bbc178f1abb7462080737f8 type=CONTAINER_STARTED_EVENT
Mar 2 12:58:28.222620 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 2 12:58:28.265366 containerd[1568]: time="2026-03-02T12:58:28.265284401Z" level=warning msg="container event discarded" container=9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68 type=CONTAINER_CREATED_EVENT
Mar 2 12:58:28.266125 containerd[1568]: time="2026-03-02T12:58:28.265767021Z" level=warning msg="container event discarded" container=47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7 type=CONTAINER_CREATED_EVENT
Mar 2 12:58:28.441353 containerd[1568]: time="2026-03-02T12:58:28.439859297Z" level=warning msg="container event discarded" container=47c4f9ffc88ba867ca87e0710a87076a3bc38f1e12901f082fdd18eec776d4c7 type=CONTAINER_STARTED_EVENT
Mar 2 12:58:28.441353 containerd[1568]: time="2026-03-02T12:58:28.439934087Z" level=warning msg="container event discarded" container=9c085e26c437f225c2c0b4342ad7183a97327b23d23659be02fbe769dbad1d68 type=CONTAINER_STARTED_EVENT
Mar 2 12:58:28.491055 kubelet[2792]: E0302 12:58:28.481702 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:29.003945 sshd[4941]: Connection closed by 10.0.0.1 port 34360
Mar 2 12:58:29.003329 sshd-session[4938]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:29.026789 systemd[1]: sshd@56-10.0.0.17:22-10.0.0.1:34360.service: Deactivated successfully.
Mar 2 12:58:29.039137 systemd[1]: session-57.scope: Deactivated successfully.
Mar 2 12:58:29.048633 systemd-logind[1549]: Session 57 logged out. Waiting for processes to exit.
Mar 2 12:58:29.066046 systemd-logind[1549]: Removed session 57.
Mar 2 12:58:34.279118 systemd[1]: Started sshd@57-10.0.0.17:22-10.0.0.1:41470.service - OpenSSH per-connection server daemon (10.0.0.1:41470).
Mar 2 12:58:34.824490 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 41470 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:58:34.838628 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:58:34.910044 systemd-logind[1549]: New session 58 of user core.
Mar 2 12:58:34.971704 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 2 12:58:35.748396 sshd[4957]: Connection closed by 10.0.0.1 port 41470
Mar 2 12:58:35.749600 sshd-session[4954]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:35.782681 systemd[1]: sshd@57-10.0.0.17:22-10.0.0.1:41470.service: Deactivated successfully.
Mar 2 12:58:35.800834 systemd[1]: session-58.scope: Deactivated successfully.
Mar 2 12:58:35.808251 systemd-logind[1549]: Session 58 logged out. Waiting for processes to exit.
Mar 2 12:58:35.830671 systemd-logind[1549]: Removed session 58.
Mar 2 12:58:37.469272 kubelet[2792]: E0302 12:58:37.467686 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:40.867284 systemd[1]: Started sshd@58-10.0.0.17:22-10.0.0.1:34404.service - OpenSSH per-connection server daemon (10.0.0.1:34404).
Mar 2 12:58:41.494025 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 34404 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:58:41.501523 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:58:41.570070 systemd-logind[1549]: New session 59 of user core.
Mar 2 12:58:41.602404 systemd[1]: Started session-59.scope - Session 59 of User core.
Mar 2 12:58:42.347083 sshd[4974]: Connection closed by 10.0.0.1 port 34404
Mar 2 12:58:42.344566 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:42.380760 systemd[1]: sshd@58-10.0.0.17:22-10.0.0.1:34404.service: Deactivated successfully.
Mar 2 12:58:42.407132 systemd[1]: session-59.scope: Deactivated successfully.
Mar 2 12:58:42.422984 systemd-logind[1549]: Session 59 logged out. Waiting for processes to exit.
Mar 2 12:58:42.428126 systemd-logind[1549]: Removed session 59.
Mar 2 12:58:47.426762 systemd[1]: Started sshd@59-10.0.0.17:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412).
Mar 2 12:58:47.693038 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:58:47.705810 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:58:47.743658 systemd-logind[1549]: New session 60 of user core.
Mar 2 12:58:47.795781 systemd[1]: Started session-60.scope - Session 60 of User core.
Mar 2 12:58:48.560515 sshd[4992]: Connection closed by 10.0.0.1 port 34412
Mar 2 12:58:48.562691 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:48.598581 systemd[1]: sshd@59-10.0.0.17:22-10.0.0.1:34412.service: Deactivated successfully.
Mar 2 12:58:48.699894 systemd[1]: session-60.scope: Deactivated successfully.
Mar 2 12:58:48.751304 systemd-logind[1549]: Session 60 logged out. Waiting for processes to exit.
Mar 2 12:58:48.794294 systemd-logind[1549]: Removed session 60.
Mar 2 12:58:53.648794 systemd[1]: Started sshd@60-10.0.0.17:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312).
Mar 2 12:58:53.967189 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:58:53.973845 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:58:54.009810 systemd-logind[1549]: New session 61 of user core.
Mar 2 12:58:54.030531 systemd[1]: Started session-61.scope - Session 61 of User core.
Mar 2 12:58:54.478936 kubelet[2792]: E0302 12:58:54.472666 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:54.902789 sshd[5010]: Connection closed by 10.0.0.1 port 54312
Mar 2 12:58:54.903088 sshd-session[5007]: pam_unix(sshd:session): session closed for user core
Mar 2 12:58:54.932933 systemd[1]: sshd@60-10.0.0.17:22-10.0.0.1:54312.service: Deactivated successfully.
Mar 2 12:58:54.951123 systemd[1]: session-61.scope: Deactivated successfully.
Mar 2 12:58:54.977075 systemd-logind[1549]: Session 61 logged out. Waiting for processes to exit.
Mar 2 12:58:54.996781 systemd-logind[1549]: Removed session 61.
Mar 2 12:58:59.461754 kubelet[2792]: E0302 12:58:59.448765 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:59.921571 systemd[1]: Started sshd@61-10.0.0.17:22-10.0.0.1:54326.service - OpenSSH per-connection server daemon (10.0.0.1:54326).
Mar 2 12:59:00.211694 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 54326 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:00.206645 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:00.250785 systemd-logind[1549]: New session 62 of user core.
Mar 2 12:59:00.290250 systemd[1]: Started session-62.scope - Session 62 of User core.
Mar 2 12:59:01.843560 sshd[5028]: Connection closed by 10.0.0.1 port 54326
Mar 2 12:59:01.887805 sshd-session[5025]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:01.961092 systemd[1]: sshd@61-10.0.0.17:22-10.0.0.1:54326.service: Deactivated successfully.
Mar 2 12:59:02.021240 systemd[1]: session-62.scope: Deactivated successfully.
Mar 2 12:59:02.056570 systemd-logind[1549]: Session 62 logged out. Waiting for processes to exit.
Mar 2 12:59:02.111769 systemd-logind[1549]: Removed session 62.
Mar 2 12:59:05.523381 kubelet[2792]: E0302 12:59:05.522708 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:06.461491 kubelet[2792]: E0302 12:59:06.460505 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:06.913815 systemd[1]: Started sshd@62-10.0.0.17:22-10.0.0.1:50366.service - OpenSSH per-connection server daemon (10.0.0.1:50366).
Mar 2 12:59:07.255565 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 50366 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:07.258746 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:07.301565 systemd-logind[1549]: New session 63 of user core.
Mar 2 12:59:07.358184 systemd[1]: Started session-63.scope - Session 63 of User core.
Mar 2 12:59:08.152667 sshd[5044]: Connection closed by 10.0.0.1 port 50366
Mar 2 12:59:08.153676 sshd-session[5041]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:08.263896 systemd[1]: sshd@62-10.0.0.17:22-10.0.0.1:50366.service: Deactivated successfully.
Mar 2 12:59:08.294846 systemd[1]: session-63.scope: Deactivated successfully.
Mar 2 12:59:08.305828 systemd-logind[1549]: Session 63 logged out. Waiting for processes to exit.
Mar 2 12:59:08.319652 systemd-logind[1549]: Removed session 63.
Mar 2 12:59:10.452700 kubelet[2792]: E0302 12:59:10.447599 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:13.238714 systemd[1]: Started sshd@63-10.0.0.17:22-10.0.0.1:43958.service - OpenSSH per-connection server daemon (10.0.0.1:43958).
Mar 2 12:59:14.046813 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:14.065831 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:14.123668 systemd-logind[1549]: New session 64 of user core.
Mar 2 12:59:14.201620 systemd[1]: Started session-64.scope - Session 64 of User core.
Mar 2 12:59:15.034071 sshd[5060]: Connection closed by 10.0.0.1 port 43958
Mar 2 12:59:15.070027 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:15.132094 systemd[1]: sshd@63-10.0.0.17:22-10.0.0.1:43958.service: Deactivated successfully.
Mar 2 12:59:15.155629 systemd[1]: session-64.scope: Deactivated successfully.
Mar 2 12:59:15.168879 systemd-logind[1549]: Session 64 logged out. Waiting for processes to exit.
Mar 2 12:59:15.184573 systemd-logind[1549]: Removed session 64.
Mar 2 12:59:20.022332 systemd[1]: Started sshd@64-10.0.0.17:22-10.0.0.1:43968.service - OpenSSH per-connection server daemon (10.0.0.1:43968).
Mar 2 12:59:20.278704 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 43968 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:20.280865 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:20.318543 systemd-logind[1549]: New session 65 of user core.
Mar 2 12:59:20.335342 systemd[1]: Started session-65.scope - Session 65 of User core.
Mar 2 12:59:20.931908 sshd[5078]: Connection closed by 10.0.0.1 port 43968
Mar 2 12:59:20.931332 sshd-session[5075]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:20.964956 systemd[1]: sshd@64-10.0.0.17:22-10.0.0.1:43968.service: Deactivated successfully.
Mar 2 12:59:20.971298 systemd[1]: session-65.scope: Deactivated successfully.
Mar 2 12:59:20.992717 systemd-logind[1549]: Session 65 logged out. Waiting for processes to exit.
Mar 2 12:59:21.017660 systemd-logind[1549]: Removed session 65.
Mar 2 12:59:23.110730 kubelet[2792]: E0302 12:59:23.085526 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:26.060949 systemd[1]: Started sshd@65-10.0.0.17:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128).
Mar 2 12:59:26.881230 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:27.538200 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:28.328782 systemd-logind[1549]: New session 66 of user core.
Mar 2 12:59:28.348593 systemd[1]: Started session-66.scope - Session 66 of User core.
Mar 2 12:59:31.759628 kubelet[2792]: E0302 12:59:31.751227 2792 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.304s"
Mar 2 12:59:44.483709 systemd[1]: cri-containerd-1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5.scope: Deactivated successfully.
Mar 2 12:59:44.492361 systemd[1]: cri-containerd-1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5.scope: Consumed 23.795s CPU time, 67.6M memory peak, 6.1M read from disk.
Mar 2 12:59:45.594125 sshd[5094]: Connection closed by 10.0.0.1 port 38128
Mar 2 12:59:45.598784 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:45.626006 systemd[1]: sshd@65-10.0.0.17:22-10.0.0.1:38128.service: Deactivated successfully.
Mar 2 12:59:45.635721 systemd[1]: session-66.scope: Deactivated successfully.
Mar 2 12:59:45.637378 containerd[1568]: time="2026-03-02T12:59:45.637033783Z" level=info msg="received container exit event container_id:\"1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5\" id:\"1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5\" pid:2640 exit_status:1 exited_at:{seconds:1772456385 nanos:624933112}"
Mar 2 12:59:45.642216 systemd-logind[1549]: Session 66 logged out. Waiting for processes to exit.
Mar 2 12:59:45.645027 systemd-logind[1549]: Removed session 66.
Mar 2 12:59:45.721102 systemd[1]: cri-containerd-f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe.scope: Deactivated successfully.
Mar 2 12:59:45.721825 systemd[1]: cri-containerd-f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe.scope: Consumed 3.905s CPU time, 26.8M memory peak, 544K read from disk, 4K written to disk.
Mar 2 12:59:46.124705 containerd[1568]: time="2026-03-02T12:59:46.124225875Z" level=info msg="received container exit event container_id:\"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\" id:\"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\" pid:3213 exit_status:1 exited_at:{seconds:1772456386 nanos:109382501}"
Mar 2 12:59:46.141472 kubelet[2792]: E0302 12:59:46.140830 2792 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.93s"
Mar 2 12:59:46.171017 kubelet[2792]: E0302 12:59:46.170040 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:46.201045 systemd[1]: cri-containerd-fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac.scope: Deactivated successfully.
Mar 2 12:59:46.205822 systemd[1]: cri-containerd-fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac.scope: Consumed 9.447s CPU time, 25.2M memory peak, 516K read from disk.
Mar 2 12:59:46.285861 containerd[1568]: time="2026-03-02T12:59:46.285653781Z" level=info msg="received container exit event container_id:\"fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac\" id:\"fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac\" pid:2649 exit_status:1 exited_at:{seconds:1772456386 nanos:248822492}"
Mar 2 12:59:46.640507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5-rootfs.mount: Deactivated successfully.
Mar 2 12:59:46.672670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe-rootfs.mount: Deactivated successfully.
Mar 2 12:59:46.757600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac-rootfs.mount: Deactivated successfully.
Mar 2 12:59:47.750016 kubelet[2792]: I0302 12:59:47.745279 2792 scope.go:117] "RemoveContainer" containerID="f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe"
Mar 2 12:59:47.750016 kubelet[2792]: E0302 12:59:47.745622 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:47.819403 kubelet[2792]: I0302 12:59:47.818105 2792 scope.go:117] "RemoveContainer" containerID="fd8c29e2574ec43b238bb749b690a8d8e68ab56cc0e4f52dc06b15dfc97c92ac"
Mar 2 12:59:47.819403 kubelet[2792]: E0302 12:59:47.840114 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:47.884286 containerd[1568]: time="2026-03-02T12:59:47.879956956Z" level=info msg="CreateContainer within sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Mar 2 12:59:47.951067 containerd[1568]: time="2026-03-02T12:59:47.950848846Z" level=info msg="CreateContainer within sandbox \"fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 2 12:59:47.951782 kubelet[2792]: I0302 12:59:47.951672 2792 scope.go:117] "RemoveContainer" containerID="1855f95ebdea81c06e5a7f3e76c21e9f01f055b770640aa9507994592437b4e5"
Mar 2 12:59:47.958782 kubelet[2792]: E0302 12:59:47.958348 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:48.059945 containerd[1568]: time="2026-03-02T12:59:48.047899885Z" level=info msg="CreateContainer within sandbox \"49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 2 12:59:48.215812 containerd[1568]: time="2026-03-02T12:59:48.215727118Z" level=info msg="Container 08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:59:48.229512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394568008.mount: Deactivated successfully.
Mar 2 12:59:48.244731 containerd[1568]: time="2026-03-02T12:59:48.244669751Z" level=info msg="Container ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:59:48.292519 containerd[1568]: time="2026-03-02T12:59:48.291729017Z" level=info msg="Container 25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86: CDI devices from CRI Config.CDIDevices: []"
Mar 2 12:59:48.331865 containerd[1568]: time="2026-03-02T12:59:48.331601404Z" level=info msg="CreateContainer within sandbox \"fb08821b4b5786e9424348e714cf9f5e6feb99d1f8ffc4804d198c06420aec4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892\""
Mar 2 12:59:48.332999 containerd[1568]: time="2026-03-02T12:59:48.332626073Z" level=info msg="CreateContainer within sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\""
Mar 2 12:59:48.346506 containerd[1568]: time="2026-03-02T12:59:48.343226075Z" level=info msg="StartContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\""
Mar 2 12:59:48.370600 containerd[1568]: time="2026-03-02T12:59:48.370547247Z" level=info msg="StartContainer for \"ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892\""
Mar 2 12:59:48.373626 containerd[1568]: time="2026-03-02T12:59:48.373588341Z" level=info msg="connecting to shim 08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23" address="unix:///run/containerd/s/58296dd73fdb9b84a968dc1c643c58d09f5bfa2474f5aeed6feb5f0dd309daff" protocol=ttrpc version=3
Mar 2 12:59:48.373897 containerd[1568]: time="2026-03-02T12:59:48.373864735Z" level=info msg="connecting to shim ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892" address="unix:///run/containerd/s/18f9d74f9407104bb638b12e1ed92542db609168ecbd25629add237d0822b15b" protocol=ttrpc version=3
Mar 2 12:59:48.397289 containerd[1568]: time="2026-03-02T12:59:48.393303445Z" level=info msg="CreateContainer within sandbox \"49e6a04d06bbd6427b2601fbd640665f72c8b7ccdb6228baebdc76cc6cf224e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86\""
Mar 2 12:59:48.411194 containerd[1568]: time="2026-03-02T12:59:48.401204333Z" level=info msg="StartContainer for \"25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86\""
Mar 2 12:59:48.435350 containerd[1568]: time="2026-03-02T12:59:48.432914530Z" level=info msg="connecting to shim 25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86" address="unix:///run/containerd/s/e567645b14c3dbc92dbe6873afdd39a175cfb93bddc530bb5b42a85f0d61d2ba" protocol=ttrpc version=3
Mar 2 12:59:48.585482 systemd[1]: Started cri-containerd-08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23.scope - libcontainer container 08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23.
Mar 2 12:59:48.602858 systemd[1]: Started cri-containerd-ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892.scope - libcontainer container ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892.
Mar 2 12:59:48.639391 systemd[1]: Started cri-containerd-25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86.scope - libcontainer container 25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86.
Mar 2 12:59:48.847795 containerd[1568]: time="2026-03-02T12:59:48.846816447Z" level=info msg="StartContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" returns successfully"
Mar 2 12:59:49.111311 kubelet[2792]: E0302 12:59:49.107014 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:49.175315 containerd[1568]: time="2026-03-02T12:59:49.173830045Z" level=info msg="StartContainer for \"ae840ac2aa3df98869be96b63ce8df7d1d28802a039ec334546d0a06b4641892\" returns successfully"
Mar 2 12:59:49.323358 containerd[1568]: time="2026-03-02T12:59:49.323305980Z" level=info msg="StartContainer for \"25ffd5e5d69e7e9f916ecdf9f1de72f82d3acc7f6b1b0c08a91ca8d04ed4ce86\" returns successfully"
Mar 2 12:59:50.203330 kubelet[2792]: E0302 12:59:50.194903 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:50.219800 kubelet[2792]: E0302 12:59:50.219040 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:50.651324 systemd[1]: Started sshd@66-10.0.0.17:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820).
Mar 2 12:59:51.211665 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:51.220899 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:51.245518 kubelet[2792]: E0302 12:59:51.239567 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:51.307006 systemd-logind[1549]: New session 67 of user core.
Mar 2 12:59:51.324773 systemd[1]: Started session-67.scope - Session 67 of User core.
Mar 2 12:59:51.955963 sshd[5257]: Connection closed by 10.0.0.1 port 42820
Mar 2 12:59:51.966618 sshd-session[5253]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:52.001521 systemd[1]: sshd@66-10.0.0.17:22-10.0.0.1:42820.service: Deactivated successfully.
Mar 2 12:59:52.013735 systemd[1]: session-67.scope: Deactivated successfully.
Mar 2 12:59:52.032786 systemd-logind[1549]: Session 67 logged out. Waiting for processes to exit.
Mar 2 12:59:52.046350 systemd-logind[1549]: Removed session 67.
Mar 2 12:59:52.295619 kubelet[2792]: E0302 12:59:52.282229 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:53.627484 kubelet[2792]: E0302 12:59:53.622161 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:53.689267 kubelet[2792]: E0302 12:59:53.689095 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:57.006082 systemd[1]: Started sshd@67-10.0.0.17:22-10.0.0.1:42836.service - OpenSSH per-connection server daemon (10.0.0.1:42836).
Mar 2 12:59:57.256198 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 42836 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 12:59:57.264380 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 12:59:57.316109 systemd-logind[1549]: New session 68 of user core.
Mar 2 12:59:57.330779 systemd[1]: Started session-68.scope - Session 68 of User core.
Mar 2 12:59:57.775367 sshd[5277]: Connection closed by 10.0.0.1 port 42836
Mar 2 12:59:57.776357 sshd-session[5274]: pam_unix(sshd:session): session closed for user core
Mar 2 12:59:57.792714 systemd[1]: sshd@67-10.0.0.17:22-10.0.0.1:42836.service: Deactivated successfully.
Mar 2 12:59:57.802055 systemd[1]: session-68.scope: Deactivated successfully.
Mar 2 12:59:57.809039 systemd-logind[1549]: Session 68 logged out. Waiting for processes to exit.
Mar 2 12:59:57.829873 systemd-logind[1549]: Removed session 68.
Mar 2 13:00:01.471414 kubelet[2792]: E0302 13:00:01.446500 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:02.841568 systemd[1]: Started sshd@68-10.0.0.17:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800).
Mar 2 13:00:03.253027 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:00:03.289931 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:03.389840 systemd-logind[1549]: New session 69 of user core.
Mar 2 13:00:03.410578 systemd[1]: Started session-69.scope - Session 69 of User core.
Mar 2 13:00:03.707849 kubelet[2792]: E0302 13:00:03.702562 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:03.829504 kubelet[2792]: E0302 13:00:03.827157 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:04.107377 sshd[5293]: Connection closed by 10.0.0.1 port 53800
Mar 2 13:00:04.109912 sshd-session[5290]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:04.155872 systemd[1]: sshd@68-10.0.0.17:22-10.0.0.1:53800.service: Deactivated successfully.
Mar 2 13:00:04.181367 systemd[1]: session-69.scope: Deactivated successfully.
Mar 2 13:00:04.187046 systemd-logind[1549]: Session 69 logged out. Waiting for processes to exit.
Mar 2 13:00:04.190985 systemd-logind[1549]: Removed session 69.
Mar 2 13:00:04.519411 kubelet[2792]: E0302 13:00:04.514059 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:09.167087 systemd[1]: Started sshd@69-10.0.0.17:22-10.0.0.1:53808.service - OpenSSH per-connection server daemon (10.0.0.1:53808).
Mar 2 13:00:09.539124 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:00:09.544538 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:09.594501 systemd-logind[1549]: New session 70 of user core.
Mar 2 13:00:09.627748 systemd[1]: Started session-70.scope - Session 70 of User core.
Mar 2 13:00:10.395509 sshd[5309]: Connection closed by 10.0.0.1 port 53808
Mar 2 13:00:10.393752 sshd-session[5306]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:10.401018 systemd[1]: sshd@69-10.0.0.17:22-10.0.0.1:53808.service: Deactivated successfully.
Mar 2 13:00:10.408257 systemd[1]: session-70.scope: Deactivated successfully.
Mar 2 13:00:10.421275 systemd-logind[1549]: Session 70 logged out. Waiting for processes to exit.
Mar 2 13:00:10.438084 systemd-logind[1549]: Removed session 70.
Mar 2 13:00:10.447212 kubelet[2792]: E0302 13:00:10.446148 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:15.538655 systemd[1]: Started sshd@70-10.0.0.17:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810).
Mar 2 13:00:15.850027 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:00:15.849497 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:15.914607 systemd-logind[1549]: New session 71 of user core.
Mar 2 13:00:15.935953 systemd[1]: Started session-71.scope - Session 71 of User core.
Mar 2 13:00:16.718629 kubelet[2792]: E0302 13:00:16.696016 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:18.544924 sshd[5325]: Connection closed by 10.0.0.1 port 39810
Mar 2 13:00:18.639617 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:18.676560 systemd[1]: sshd@70-10.0.0.17:22-10.0.0.1:39810.service: Deactivated successfully.
Mar 2 13:00:18.745091 systemd[1]: session-71.scope: Deactivated successfully.
Mar 2 13:00:18.777886 systemd-logind[1549]: Session 71 logged out. Waiting for processes to exit.
Mar 2 13:00:18.804562 systemd-logind[1549]: Removed session 71.
Mar 2 13:00:23.614733 systemd[1]: Started sshd@71-10.0.0.17:22-10.0.0.1:32930.service - OpenSSH per-connection server daemon (10.0.0.1:32930).
Mar 2 13:00:23.850842 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 32930 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4
Mar 2 13:00:23.869087 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:23.909303 systemd-logind[1549]: New session 72 of user core.
Mar 2 13:00:23.924042 systemd[1]: Started session-72.scope - Session 72 of User core.
Mar 2 13:00:24.675515 sshd[5346]: Connection closed by 10.0.0.1 port 32930 Mar 2 13:00:24.676346 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:24.685858 systemd[1]: sshd@71-10.0.0.17:22-10.0.0.1:32930.service: Deactivated successfully. Mar 2 13:00:24.690811 systemd[1]: session-72.scope: Deactivated successfully. Mar 2 13:00:24.697051 systemd-logind[1549]: Session 72 logged out. Waiting for processes to exit. Mar 2 13:00:24.701670 systemd-logind[1549]: Removed session 72. Mar 2 13:00:27.445406 kubelet[2792]: E0302 13:00:27.445218 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:29.817854 systemd[1]: Started sshd@72-10.0.0.17:22-10.0.0.1:32938.service - OpenSSH per-connection server daemon (10.0.0.1:32938). Mar 2 13:00:30.168054 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 32938 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:30.174398 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:30.216600 systemd-logind[1549]: New session 73 of user core. Mar 2 13:00:30.249023 systemd[1]: Started session-73.scope - Session 73 of User core. Mar 2 13:00:30.715562 sshd[5362]: Connection closed by 10.0.0.1 port 32938 Mar 2 13:00:30.714367 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:30.728346 systemd[1]: sshd@72-10.0.0.17:22-10.0.0.1:32938.service: Deactivated successfully. Mar 2 13:00:30.744723 systemd[1]: session-73.scope: Deactivated successfully. Mar 2 13:00:30.757116 systemd-logind[1549]: Session 73 logged out. Waiting for processes to exit. Mar 2 13:00:30.792850 systemd-logind[1549]: Removed session 73. Mar 2 13:00:35.832109 systemd[1]: Started sshd@73-10.0.0.17:22-10.0.0.1:36224.service - OpenSSH per-connection server daemon (10.0.0.1:36224). 
Mar 2 13:00:36.304772 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 36224 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:36.324838 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:36.393240 systemd-logind[1549]: New session 74 of user core. Mar 2 13:00:36.471114 systemd[1]: Started session-74.scope - Session 74 of User core. Mar 2 13:00:37.185642 sshd[5381]: Connection closed by 10.0.0.1 port 36224 Mar 2 13:00:37.190933 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:37.208133 systemd-logind[1549]: Session 74 logged out. Waiting for processes to exit. Mar 2 13:00:37.224836 systemd[1]: sshd@73-10.0.0.17:22-10.0.0.1:36224.service: Deactivated successfully. Mar 2 13:00:37.236266 systemd[1]: session-74.scope: Deactivated successfully. Mar 2 13:00:37.283399 systemd-logind[1549]: Removed session 74. Mar 2 13:00:42.242199 systemd[1]: Started sshd@74-10.0.0.17:22-10.0.0.1:38642.service - OpenSSH per-connection server daemon (10.0.0.1:38642). Mar 2 13:00:42.633899 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 38642 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:42.640329 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:42.682046 systemd-logind[1549]: New session 75 of user core. Mar 2 13:00:42.723472 systemd[1]: Started session-75.scope - Session 75 of User core. Mar 2 13:00:43.345870 sshd[5399]: Connection closed by 10.0.0.1 port 38642 Mar 2 13:00:43.346867 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:43.386796 systemd[1]: sshd@74-10.0.0.17:22-10.0.0.1:38642.service: Deactivated successfully. Mar 2 13:00:43.411295 systemd[1]: session-75.scope: Deactivated successfully. Mar 2 13:00:43.422717 systemd-logind[1549]: Session 75 logged out. Waiting for processes to exit. 
Mar 2 13:00:43.457378 systemd[1]: Started sshd@75-10.0.0.17:22-10.0.0.1:38644.service - OpenSSH per-connection server daemon (10.0.0.1:38644). Mar 2 13:00:43.466115 systemd-logind[1549]: Removed session 75. Mar 2 13:00:43.721659 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 38644 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:43.723253 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:43.806677 systemd-logind[1549]: New session 76 of user core. Mar 2 13:00:43.822063 systemd[1]: Started session-76.scope - Session 76 of User core. Mar 2 13:00:49.480524 containerd[1568]: time="2026-03-02T13:00:49.480323921Z" level=info msg="StopContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" with timeout 30 (s)" Mar 2 13:00:49.508075 containerd[1568]: time="2026-03-02T13:00:49.506228258Z" level=info msg="Stop container \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" with signal terminated" Mar 2 13:00:49.993567 systemd[1]: cri-containerd-08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23.scope: Deactivated successfully. Mar 2 13:00:49.994220 systemd[1]: cri-containerd-08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23.scope: Consumed 822ms CPU time, 28.5M memory peak, 1.1M read from disk, 4K written to disk. 
Mar 2 13:00:50.020912 containerd[1568]: time="2026-03-02T13:00:50.020692693Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:00:50.021498 containerd[1568]: time="2026-03-02T13:00:50.021226370Z" level=info msg="received container exit event container_id:\"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" id:\"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" pid:5189 exited_at:{seconds:1772456450 nanos:16208902}" Mar 2 13:00:50.063851 containerd[1568]: time="2026-03-02T13:00:50.061264936Z" level=info msg="StopContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" with timeout 2 (s)" Mar 2 13:00:50.083107 containerd[1568]: time="2026-03-02T13:00:50.082223741Z" level=info msg="Stop container \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" with signal terminated" Mar 2 13:00:50.293105 systemd-networkd[1479]: lxc_health: Link DOWN Mar 2 13:00:50.293916 systemd-networkd[1479]: lxc_health: Lost carrier Mar 2 13:00:50.419288 sshd[5415]: Connection closed by 10.0.0.1 port 38644 Mar 2 13:00:50.431782 sshd-session[5412]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:50.510531 systemd[1]: sshd@75-10.0.0.17:22-10.0.0.1:38644.service: Deactivated successfully. Mar 2 13:00:50.589409 systemd[1]: session-76.scope: Deactivated successfully. Mar 2 13:00:50.589910 systemd[1]: session-76.scope: Consumed 1.245s CPU time, 26.4M memory peak. Mar 2 13:00:50.644971 systemd-logind[1549]: Session 76 logged out. Waiting for processes to exit. Mar 2 13:00:50.687967 systemd[1]: Started sshd@76-10.0.0.17:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536). 
Mar 2 13:00:50.689967 systemd[1]: cri-containerd-45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490.scope: Deactivated successfully. Mar 2 13:00:50.690851 systemd[1]: cri-containerd-45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490.scope: Consumed 18.312s CPU time, 125M memory peak, 200K read from disk, 13.3M written to disk. Mar 2 13:00:50.699843 systemd-logind[1549]: Removed session 76. Mar 2 13:00:50.718404 containerd[1568]: time="2026-03-02T13:00:50.718160731Z" level=info msg="received container exit event container_id:\"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" id:\"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" pid:3446 exited_at:{seconds:1772456450 nanos:701646457}" Mar 2 13:00:50.742494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23-rootfs.mount: Deactivated successfully. Mar 2 13:00:50.847921 kubelet[2792]: E0302 13:00:50.847815 2792 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:00:50.977727 containerd[1568]: time="2026-03-02T13:00:50.977321850Z" level=info msg="StopContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" returns successfully" Mar 2 13:00:51.025784 containerd[1568]: time="2026-03-02T13:00:51.020838792Z" level=info msg="StopPodSandbox for \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\"" Mar 2 13:00:51.039114 containerd[1568]: time="2026-03-02T13:00:51.028250439Z" level=info msg="Container to stop \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.039114 containerd[1568]: time="2026-03-02T13:00:51.028294841Z" level=info msg="Container to stop 
\"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.103947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490-rootfs.mount: Deactivated successfully. Mar 2 13:00:51.167599 systemd[1]: cri-containerd-21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd.scope: Deactivated successfully. Mar 2 13:00:51.193092 containerd[1568]: time="2026-03-02T13:00:51.183751328Z" level=info msg="received sandbox exit event container_id:\"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" id:\"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" exit_status:137 exited_at:{seconds:1772456451 nanos:172895909}" monitor_name=podsandbox Mar 2 13:00:51.233780 containerd[1568]: time="2026-03-02T13:00:51.233524414Z" level=info msg="StopContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" returns successfully" Mar 2 13:00:51.278717 containerd[1568]: time="2026-03-02T13:00:51.278660585Z" level=info msg="StopPodSandbox for \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\"" Mar 2 13:00:51.278995 containerd[1568]: time="2026-03-02T13:00:51.278971024Z" level=info msg="Container to stop \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.279111 containerd[1568]: time="2026-03-02T13:00:51.279090207Z" level=info msg="Container to stop \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.279224 containerd[1568]: time="2026-03-02T13:00:51.279202546Z" level=info msg="Container to stop \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 
13:00:51.279319 containerd[1568]: time="2026-03-02T13:00:51.279296642Z" level=info msg="Container to stop \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.279408 containerd[1568]: time="2026-03-02T13:00:51.279388333Z" level=info msg="Container to stop \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 2 13:00:51.442311 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:51.444373 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:51.449561 systemd[1]: cri-containerd-9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28.scope: Deactivated successfully. Mar 2 13:00:51.505821 containerd[1568]: time="2026-03-02T13:00:51.505255611Z" level=info msg="received sandbox exit event container_id:\"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" id:\"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" exit_status:137 exited_at:{seconds:1772456451 nanos:503738665}" monitor_name=podsandbox Mar 2 13:00:51.604363 systemd-logind[1549]: New session 77 of user core. Mar 2 13:00:51.610808 systemd[1]: Started session-77.scope - Session 77 of User core. Mar 2 13:00:51.670667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd-rootfs.mount: Deactivated successfully. 
Mar 2 13:00:51.728273 containerd[1568]: time="2026-03-02T13:00:51.727930163Z" level=info msg="shim disconnected" id=21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd namespace=k8s.io Mar 2 13:00:51.728273 containerd[1568]: time="2026-03-02T13:00:51.727974255Z" level=warning msg="cleaning up after shim disconnected" id=21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd namespace=k8s.io Mar 2 13:00:51.728273 containerd[1568]: time="2026-03-02T13:00:51.727986658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:00:51.870185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28-rootfs.mount: Deactivated successfully. Mar 2 13:00:51.899265 containerd[1568]: time="2026-03-02T13:00:51.899142540Z" level=info msg="shim disconnected" id=9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28 namespace=k8s.io Mar 2 13:00:51.899265 containerd[1568]: time="2026-03-02T13:00:51.899205798Z" level=warning msg="cleaning up after shim disconnected" id=9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28 namespace=k8s.io Mar 2 13:00:51.899265 containerd[1568]: time="2026-03-02T13:00:51.899220756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:00:51.974240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd-shm.mount: Deactivated successfully. 
Mar 2 13:00:51.976154 containerd[1568]: time="2026-03-02T13:00:51.975794594Z" level=info msg="TearDown network for sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" successfully" Mar 2 13:00:51.976154 containerd[1568]: time="2026-03-02T13:00:51.975833476Z" level=info msg="StopPodSandbox for \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" returns successfully" Mar 2 13:00:51.982101 kubelet[2792]: I0302 13:00:51.982069 2792 scope.go:117] "RemoveContainer" containerID="f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe" Mar 2 13:00:51.983933 containerd[1568]: time="2026-03-02T13:00:51.983857251Z" level=info msg="received sandbox container exit event sandbox_id:\"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" exit_status:137 exited_at:{seconds:1772456451 nanos:172895909}" monitor_name=criService Mar 2 13:00:51.994225 containerd[1568]: time="2026-03-02T13:00:51.994188514Z" level=info msg="RemoveContainer for \"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\"" Mar 2 13:00:52.019896 containerd[1568]: time="2026-03-02T13:00:52.017821465Z" level=info msg="received sandbox container exit event sandbox_id:\"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" exit_status:137 exited_at:{seconds:1772456451 nanos:503738665}" monitor_name=criService Mar 2 13:00:52.023565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28-shm.mount: Deactivated successfully. 
Mar 2 13:00:52.031278 containerd[1568]: time="2026-03-02T13:00:52.031205852Z" level=info msg="TearDown network for sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" successfully" Mar 2 13:00:52.032088 containerd[1568]: time="2026-03-02T13:00:52.031904314Z" level=info msg="StopPodSandbox for \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" returns successfully" Mar 2 13:00:52.102413 containerd[1568]: time="2026-03-02T13:00:52.102280789Z" level=info msg="RemoveContainer for \"f2bbdfda7b4eeaa580ac2d0de110cd1f2ad4e310643932e99f636e05ba43d7fe\" returns successfully" Mar 2 13:00:52.195295 kubelet[2792]: I0302 13:00:52.194969 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjkgk\" (UniqueName: \"kubernetes.io/projected/2a4de705-4910-4973-a6a4-c3c3945da20c-kube-api-access-zjkgk\") pod \"2a4de705-4910-4973-a6a4-c3c3945da20c\" (UID: \"2a4de705-4910-4973-a6a4-c3c3945da20c\") " Mar 2 13:00:52.195295 kubelet[2792]: I0302 13:00:52.195042 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a4de705-4910-4973-a6a4-c3c3945da20c-cilium-config-path\") pod \"2a4de705-4910-4973-a6a4-c3c3945da20c\" (UID: \"2a4de705-4910-4973-a6a4-c3c3945da20c\") " Mar 2 13:00:52.246338 kubelet[2792]: I0302 13:00:52.237861 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4de705-4910-4973-a6a4-c3c3945da20c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a4de705-4910-4973-a6a4-c3c3945da20c" (UID: "2a4de705-4910-4973-a6a4-c3c3945da20c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:00:52.255210 systemd[1]: var-lib-kubelet-pods-2a4de705\x2d4910\x2d4973\x2da6a4\x2dc3c3945da20c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjkgk.mount: Deactivated successfully. 
Mar 2 13:00:52.276507 kubelet[2792]: I0302 13:00:52.275815 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4de705-4910-4973-a6a4-c3c3945da20c-kube-api-access-zjkgk" (OuterVolumeSpecName: "kube-api-access-zjkgk") pod "2a4de705-4910-4973-a6a4-c3c3945da20c" (UID: "2a4de705-4910-4973-a6a4-c3c3945da20c"). InnerVolumeSpecName "kube-api-access-zjkgk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:00:52.297257 kubelet[2792]: I0302 13:00:52.297120 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1bc1ef55-2431-41ce-80df-9c574b5de752-clustermesh-secrets\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.297578 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-bpf-maps\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.297961 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cni-path\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.298015 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-config-path\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.298050 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-net\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.298077 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-etc-cni-netd\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.300405 kubelet[2792]: I0302 13:00:52.298099 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-kernel\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 kubelet[2792]: I0302 13:00:52.298117 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-hostproc\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 kubelet[2792]: I0302 13:00:52.298144 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729qn\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-kube-api-access-729qn\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 kubelet[2792]: I0302 13:00:52.298162 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-cgroup\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 
kubelet[2792]: I0302 13:00:52.298187 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-xtables-lock\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 kubelet[2792]: I0302 13:00:52.298207 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-lib-modules\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305589 kubelet[2792]: I0302 13:00:52.298231 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-hubble-tls\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.298295 2792 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-run\") pod \"1bc1ef55-2431-41ce-80df-9c574b5de752\" (UID: \"1bc1ef55-2431-41ce-80df-9c574b5de752\") " Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.298382 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjkgk\" (UniqueName: \"kubernetes.io/projected/2a4de705-4910-4973-a6a4-c3c3945da20c-kube-api-access-zjkgk\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.298398 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a4de705-4910-4973-a6a4-c3c3945da20c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.298508 2792 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.298563 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cni-path" (OuterVolumeSpecName: "cni-path") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.305939 kubelet[2792]: I0302 13:00:52.301087 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306205 kubelet[2792]: I0302 13:00:52.301184 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306205 kubelet[2792]: I0302 13:00:52.301210 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306205 kubelet[2792]: I0302 13:00:52.301553 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306205 kubelet[2792]: I0302 13:00:52.303134 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306205 kubelet[2792]: I0302 13:00:52.303167 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306388 kubelet[2792]: I0302 13:00:52.303174 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-hostproc" (OuterVolumeSpecName: "hostproc") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.306573 kubelet[2792]: I0302 13:00:52.306506 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:00:52.315222 kubelet[2792]: I0302 13:00:52.315166 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:00:52.326277 systemd[1]: var-lib-kubelet-pods-1bc1ef55\x2d2431\x2d41ce\x2d80df\x2d9c574b5de752-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d729qn.mount: Deactivated successfully. Mar 2 13:00:52.396036 kubelet[2792]: I0302 13:00:52.386248 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-kube-api-access-729qn" (OuterVolumeSpecName: "kube-api-access-729qn") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "kube-api-access-729qn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400264 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400356 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400375 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400387 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400397 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.402243 kubelet[2792]: I0302 13:00:52.400410 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405285 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405334 2792 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405347 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405358 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405369 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1bc1ef55-2431-41ce-80df-9c574b5de752-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.406917 kubelet[2792]: I0302 13:00:52.405666 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:00:52.415491 kubelet[2792]: I0302 13:00:52.415378 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bc1ef55-2431-41ce-80df-9c574b5de752-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1bc1ef55-2431-41ce-80df-9c574b5de752" (UID: "1bc1ef55-2431-41ce-80df-9c574b5de752"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:00:52.508562 kubelet[2792]: I0302 13:00:52.506735 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.508562 kubelet[2792]: I0302 13:00:52.506876 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1bc1ef55-2431-41ce-80df-9c574b5de752-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.508562 kubelet[2792]: I0302 13:00:52.506984 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-729qn\" (UniqueName: \"kubernetes.io/projected/1bc1ef55-2431-41ce-80df-9c574b5de752-kube-api-access-729qn\") on node \"localhost\" DevicePath \"\"" Mar 2 13:00:52.561232 systemd[1]: Removed slice kubepods-burstable-pod1bc1ef55_2431_41ce_80df_9c574b5de752.slice - libcontainer container kubepods-burstable-pod1bc1ef55_2431_41ce_80df_9c574b5de752.slice. Mar 2 13:00:52.569265 systemd[1]: kubepods-burstable-pod1bc1ef55_2431_41ce_80df_9c574b5de752.slice: Consumed 18.541s CPU time, 125.4M memory peak, 284K read from disk, 16.6M written to disk. Mar 2 13:00:52.579841 systemd[1]: Removed slice kubepods-besteffort-pod2a4de705_4910_4973_a6a4_c3c3945da20c.slice - libcontainer container kubepods-besteffort-pod2a4de705_4910_4973_a6a4_c3c3945da20c.slice. Mar 2 13:00:52.581097 systemd[1]: kubepods-besteffort-pod2a4de705_4910_4973_a6a4_c3c3945da20c.slice: Consumed 4.771s CPU time, 29.4M memory peak, 1.6M read from disk, 8K written to disk. Mar 2 13:00:52.871824 systemd[1]: var-lib-kubelet-pods-1bc1ef55\x2d2431\x2d41ce\x2d80df\x2d9c574b5de752-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 2 13:00:52.872416 systemd[1]: var-lib-kubelet-pods-1bc1ef55\x2d2431\x2d41ce\x2d80df\x2d9c574b5de752-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 2 13:00:53.239758 kubelet[2792]: I0302 13:00:53.011363 2792 scope.go:117] "RemoveContainer" containerID="45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490" Mar 2 13:00:53.268021 containerd[1568]: time="2026-03-02T13:00:53.267975314Z" level=info msg="RemoveContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\"" Mar 2 13:00:53.399381 containerd[1568]: time="2026-03-02T13:00:53.399324817Z" level=info msg="RemoveContainer for \"45288ac03296e945bc049fb6fdc0d7f0a6b0f1b01ceccf096a337c1e49fd5490\" returns successfully" Mar 2 13:00:53.404232 kubelet[2792]: I0302 13:00:53.403558 2792 scope.go:117] "RemoveContainer" containerID="bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c" Mar 2 13:00:53.411726 containerd[1568]: time="2026-03-02T13:00:53.406575572Z" level=info msg="RemoveContainer for \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\"" Mar 2 13:00:53.472364 containerd[1568]: time="2026-03-02T13:00:53.472206396Z" level=info msg="RemoveContainer for \"bda41957130fcbed0caa1da71bb88aa33a39932fbeaf8dad8ac819df1020b52c\" returns successfully" Mar 2 13:00:53.479357 kubelet[2792]: I0302 13:00:53.479115 2792 scope.go:117] "RemoveContainer" containerID="d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512" Mar 2 13:00:53.501138 containerd[1568]: time="2026-03-02T13:00:53.500923752Z" level=info msg="RemoveContainer for \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\"" Mar 2 13:00:53.529326 containerd[1568]: time="2026-03-02T13:00:53.529235380Z" level=info msg="RemoveContainer for \"d71e97504cf212d28091bd7f04ce2cc5446eca6ef5a331d403f18b2373f5b512\" returns successfully" Mar 2 13:00:53.534348 kubelet[2792]: I0302 13:00:53.532715 2792 scope.go:117] "RemoveContainer" 
containerID="84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b" Mar 2 13:00:53.546260 containerd[1568]: time="2026-03-02T13:00:53.545543319Z" level=info msg="RemoveContainer for \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\"" Mar 2 13:00:53.587985 containerd[1568]: time="2026-03-02T13:00:53.587929613Z" level=info msg="RemoveContainer for \"84c377ef6faf780da032b382d50b3851cad387d3e4e2838f35a23f392e75073b\" returns successfully" Mar 2 13:00:53.591810 kubelet[2792]: I0302 13:00:53.590926 2792 scope.go:117] "RemoveContainer" containerID="12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc" Mar 2 13:00:53.612126 containerd[1568]: time="2026-03-02T13:00:53.610360705Z" level=info msg="RemoveContainer for \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\"" Mar 2 13:00:53.701415 containerd[1568]: time="2026-03-02T13:00:53.701260283Z" level=info msg="RemoveContainer for \"12477c232546756e9bf9ff9588256af182be683d66d24d5dd10fba4d235caacc\" returns successfully" Mar 2 13:00:53.710530 kubelet[2792]: I0302 13:00:53.703597 2792 scope.go:117] "RemoveContainer" containerID="08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23" Mar 2 13:00:53.730020 containerd[1568]: time="2026-03-02T13:00:53.725870859Z" level=info msg="RemoveContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\"" Mar 2 13:00:53.794314 containerd[1568]: time="2026-03-02T13:00:53.793617775Z" level=info msg="RemoveContainer for \"08ae7c8240bab8b267539e3b5cbfdfdd384b8d76822aab39e9a8d3231b02de23\" returns successfully" Mar 2 13:00:54.524128 kubelet[2792]: I0302 13:00:54.503206 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc1ef55-2431-41ce-80df-9c574b5de752" path="/var/lib/kubelet/pods/1bc1ef55-2431-41ce-80df-9c574b5de752/volumes" Mar 2 13:00:54.526323 kubelet[2792]: I0302 13:00:54.526275 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2a4de705-4910-4973-a6a4-c3c3945da20c" path="/var/lib/kubelet/pods/2a4de705-4910-4973-a6a4-c3c3945da20c/volumes" Mar 2 13:00:55.837926 sshd[5529]: Connection closed by 10.0.0.1 port 48536 Mar 2 13:00:55.828970 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:55.881117 kubelet[2792]: E0302 13:00:55.880975 2792 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:00:55.929584 systemd[1]: sshd@76-10.0.0.17:22-10.0.0.1:48536.service: Deactivated successfully. Mar 2 13:00:55.932897 systemd[1]: session-77.scope: Deactivated successfully. Mar 2 13:00:55.933251 systemd[1]: session-77.scope: Consumed 1.009s CPU time, 25M memory peak. Mar 2 13:00:55.942916 systemd-logind[1549]: Session 77 logged out. Waiting for processes to exit. Mar 2 13:00:55.947001 systemd[1]: Started sshd@77-10.0.0.17:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550). Mar 2 13:00:55.971051 systemd-logind[1549]: Removed session 77. Mar 2 13:00:57.905944 kubelet[2792]: I0302 13:00:57.905167 2792 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:00:57Z","lastTransitionTime":"2026-03-02T13:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 2 13:00:57.918807 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:57.931780 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:58.189109 systemd-logind[1549]: New session 78 of user core. Mar 2 13:00:58.230375 systemd[1]: Started session-78.scope - Session 78 of User core. 
Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232815 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-hostproc\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232856 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-cilium-run\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232878 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-etc-cni-netd\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232902 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-xtables-lock\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232922 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-bpf-maps\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241135 kubelet[2792]: I0302 13:00:58.232940 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-cni-path\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241360 kubelet[2792]: I0302 13:00:58.232992 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0db0b34-f2bd-42d2-8747-af428d7319d9-cilium-config-path\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241360 kubelet[2792]: I0302 13:00:58.233012 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-host-proc-sys-net\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241360 kubelet[2792]: I0302 13:00:58.233060 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-cilium-cgroup\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241360 kubelet[2792]: I0302 13:00:58.233080 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0db0b34-f2bd-42d2-8747-af428d7319d9-clustermesh-secrets\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.241360 kubelet[2792]: I0302 13:00:58.233102 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnf4\" (UniqueName: 
\"kubernetes.io/projected/c0db0b34-f2bd-42d2-8747-af428d7319d9-kube-api-access-wtnf4\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.254331 kubelet[2792]: I0302 13:00:58.233122 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0db0b34-f2bd-42d2-8747-af428d7319d9-cilium-ipsec-secrets\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.254331 kubelet[2792]: I0302 13:00:58.233140 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-host-proc-sys-kernel\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.254331 kubelet[2792]: I0302 13:00:58.233164 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0db0b34-f2bd-42d2-8747-af428d7319d9-lib-modules\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.254331 kubelet[2792]: I0302 13:00:58.233182 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0db0b34-f2bd-42d2-8747-af428d7319d9-hubble-tls\") pod \"cilium-psrl5\" (UID: \"c0db0b34-f2bd-42d2-8747-af428d7319d9\") " pod="kube-system/cilium-psrl5" Mar 2 13:00:58.697828 sshd[5580]: Connection closed by 10.0.0.1 port 48550 Mar 2 13:00:58.711216 sshd-session[5577]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:59.025194 systemd[1]: sshd@77-10.0.0.17:22-10.0.0.1:48550.service: Deactivated successfully. 
Mar 2 13:00:59.129487 systemd[1]: session-78.scope: Deactivated successfully. Mar 2 13:00:59.207980 systemd[1]: Created slice kubepods-burstable-podc0db0b34_f2bd_42d2_8747_af428d7319d9.slice - libcontainer container kubepods-burstable-podc0db0b34_f2bd_42d2_8747_af428d7319d9.slice. Mar 2 13:00:59.214080 systemd-logind[1549]: Session 78 logged out. Waiting for processes to exit. Mar 2 13:00:59.232144 systemd[1]: Started sshd@78-10.0.0.17:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566). Mar 2 13:00:59.282395 systemd-logind[1549]: Removed session 78. Mar 2 13:00:59.325410 kubelet[2792]: E0302 13:00:59.325344 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:00:59.390634 containerd[1568]: time="2026-03-02T13:00:59.390393925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psrl5,Uid:c0db0b34-f2bd-42d2-8747-af428d7319d9,Namespace:kube-system,Attempt:0,}" Mar 2 13:00:59.696990 containerd[1568]: time="2026-03-02T13:00:59.696760289Z" level=info msg="connecting to shim 283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:00:59.865864 systemd[1]: Started cri-containerd-283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6.scope - libcontainer container 283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6. Mar 2 13:00:59.943958 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:MoH5p4fqEp8GuzeBMk4Pqqn5DVbEIQNAdzyj1G89qX4 Mar 2 13:00:59.957386 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:01:00.009283 systemd-logind[1549]: New session 79 of user core. 
Mar 2 13:01:00.032547 systemd[1]: Started session-79.scope - Session 79 of User core. Mar 2 13:01:00.349752 containerd[1568]: time="2026-03-02T13:01:00.330243894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psrl5,Uid:c0db0b34-f2bd-42d2-8747-af428d7319d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\"" Mar 2 13:01:00.357662 kubelet[2792]: E0302 13:01:00.355809 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:00.441948 containerd[1568]: time="2026-03-02T13:01:00.437644792Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:01:00.603171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183090855.mount: Deactivated successfully. 
Mar 2 13:01:00.623510 containerd[1568]: time="2026-03-02T13:01:00.620072982Z" level=info msg="Container aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:01:00.685716 containerd[1568]: time="2026-03-02T13:01:00.685544859Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a\"" Mar 2 13:01:00.783849 containerd[1568]: time="2026-03-02T13:01:00.780934329Z" level=info msg="StartContainer for \"aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a\"" Mar 2 13:01:00.795203 containerd[1568]: time="2026-03-02T13:01:00.793659656Z" level=info msg="connecting to shim aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" protocol=ttrpc version=3 Mar 2 13:01:00.962353 kubelet[2792]: E0302 13:01:00.959852 2792 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:01:01.145261 systemd[1]: Started cri-containerd-aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a.scope - libcontainer container aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a. Mar 2 13:01:01.481261 containerd[1568]: time="2026-03-02T13:01:01.481092076Z" level=info msg="StartContainer for \"aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a\" returns successfully" Mar 2 13:01:01.503760 systemd[1]: cri-containerd-aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a.scope: Deactivated successfully. 
Mar 2 13:01:01.537843 containerd[1568]: time="2026-03-02T13:01:01.531670662Z" level=info msg="received container exit event container_id:\"aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a\" id:\"aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a\" pid:5659 exited_at:{seconds:1772456461 nanos:531230081}" Mar 2 13:01:01.746942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aed1da729826546e7360887eb78daf1592243ed8fa65dc8898ba9f33b5b7225a-rootfs.mount: Deactivated successfully. Mar 2 13:01:01.905881 kubelet[2792]: E0302 13:01:01.902958 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:02.889942 kubelet[2792]: E0302 13:01:02.888406 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:02.944514 containerd[1568]: time="2026-03-02T13:01:02.944381843Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:01:03.147610 containerd[1568]: time="2026-03-02T13:01:03.143245497Z" level=info msg="Container a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:01:03.206527 containerd[1568]: time="2026-03-02T13:01:03.200142360Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489\"" Mar 2 13:01:03.206527 containerd[1568]: time="2026-03-02T13:01:03.201393175Z" level=info msg="StartContainer for 
\"a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489\"" Mar 2 13:01:03.210878 containerd[1568]: time="2026-03-02T13:01:03.210247571Z" level=info msg="connecting to shim a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" protocol=ttrpc version=3 Mar 2 13:01:03.394598 systemd[1]: Started cri-containerd-a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489.scope - libcontainer container a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489. Mar 2 13:01:03.583475 update_engine[1558]: I20260302 13:01:03.580579 1558 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 2 13:01:03.583475 update_engine[1558]: I20260302 13:01:03.580807 1558 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 2 13:01:03.583475 update_engine[1558]: I20260302 13:01:03.581367 1558 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 2 13:01:03.587841 update_engine[1558]: I20260302 13:01:03.587778 1558 omaha_request_params.cc:62] Current group set to stable Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588528 1558 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588552 1558 update_attempter.cc:643] Scheduling an action processor start. 
Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588577 1558 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588735 1558 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588879 1558 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588896 1558 omaha_request_action.cc:272] Request: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: Mar 2 13:01:03.590352 update_engine[1558]: I20260302 13:01:03.588909 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 13:01:03.615724 update_engine[1558]: I20260302 13:01:03.615631 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 13:01:03.618549 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 2 13:01:03.620987 update_engine[1558]: I20260302 13:01:03.620909 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 2 13:01:03.634578 containerd[1568]: time="2026-03-02T13:01:03.633851282Z" level=info msg="StartContainer for \"a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489\" returns successfully" Mar 2 13:01:03.639411 update_engine[1558]: E20260302 13:01:03.639308 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 13:01:03.639846 update_engine[1558]: I20260302 13:01:03.639786 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 2 13:01:03.695902 systemd[1]: cri-containerd-a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489.scope: Deactivated successfully. Mar 2 13:01:03.700252 containerd[1568]: time="2026-03-02T13:01:03.699774689Z" level=info msg="received container exit event container_id:\"a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489\" id:\"a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489\" pid:5703 exited_at:{seconds:1772456463 nanos:697231274}" Mar 2 13:01:03.851969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3d056748fcfa64e63ac249e5e1f0123ebd8ffb9f635326cd7b6f843c3fbf489-rootfs.mount: Deactivated successfully. 
Mar 2 13:01:03.928938 kubelet[2792]: E0302 13:01:03.918764 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:04.925956 kubelet[2792]: E0302 13:01:04.924919 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:04.984177 containerd[1568]: time="2026-03-02T13:01:04.984091185Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:01:05.173631 containerd[1568]: time="2026-03-02T13:01:05.173551185Z" level=info msg="Container 106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:01:05.229091 containerd[1568]: time="2026-03-02T13:01:05.228058795Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c\"" Mar 2 13:01:05.233487 containerd[1568]: time="2026-03-02T13:01:05.233287119Z" level=info msg="StartContainer for \"106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c\"" Mar 2 13:01:05.242688 containerd[1568]: time="2026-03-02T13:01:05.242631257Z" level=info msg="connecting to shim 106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" protocol=ttrpc version=3 Mar 2 13:01:05.417685 systemd[1]: Started cri-containerd-106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c.scope - libcontainer container 106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c. 
Mar 2 13:01:05.445559 kubelet[2792]: E0302 13:01:05.445342 2792 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-qtx5m" podUID="6325fa4b-1755-48b4-b3f7-f25b8f6ed550" Mar 2 13:01:05.781635 systemd[1]: cri-containerd-106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c.scope: Deactivated successfully. Mar 2 13:01:05.804017 containerd[1568]: time="2026-03-02T13:01:05.801924975Z" level=info msg="received container exit event container_id:\"106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c\" id:\"106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c\" pid:5747 exited_at:{seconds:1772456465 nanos:801218836}" Mar 2 13:01:05.881670 containerd[1568]: time="2026-03-02T13:01:05.880827012Z" level=info msg="StartContainer for \"106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c\" returns successfully" Mar 2 13:01:05.986901 kubelet[2792]: E0302 13:01:05.986597 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:05.989527 kubelet[2792]: E0302 13:01:05.989402 2792 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:01:06.043209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-106357a95184acc8fbbd07bd5c0b98d6db3216f59ab239d068867fc139175d4c-rootfs.mount: Deactivated successfully. 
Mar 2 13:01:07.044460 kubelet[2792]: E0302 13:01:07.043705 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:07.122511 containerd[1568]: time="2026-03-02T13:01:07.122392363Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:01:07.240510 containerd[1568]: time="2026-03-02T13:01:07.240379327Z" level=info msg="Container b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:01:07.286872 containerd[1568]: time="2026-03-02T13:01:07.286634496Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616\"" Mar 2 13:01:07.297247 containerd[1568]: time="2026-03-02T13:01:07.293377985Z" level=info msg="StartContainer for \"b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616\"" Mar 2 13:01:07.315853 containerd[1568]: time="2026-03-02T13:01:07.306874040Z" level=info msg="connecting to shim b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" protocol=ttrpc version=3 Mar 2 13:01:07.467885 kubelet[2792]: E0302 13:01:07.448153 2792 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-qtx5m" podUID="6325fa4b-1755-48b4-b3f7-f25b8f6ed550" Mar 2 13:01:07.504596 systemd[1]: Started 
cri-containerd-b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616.scope - libcontainer container b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616. Mar 2 13:01:07.806307 systemd[1]: cri-containerd-b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616.scope: Deactivated successfully. Mar 2 13:01:07.849006 containerd[1568]: time="2026-03-02T13:01:07.844342480Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0db0b34_f2bd_42d2_8747_af428d7319d9.slice/cri-containerd-b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616.scope/memory.events\": no such file or directory" Mar 2 13:01:07.900145 containerd[1568]: time="2026-03-02T13:01:07.895413594Z" level=info msg="received container exit event container_id:\"b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616\" id:\"b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616\" pid:5788 exited_at:{seconds:1772456467 nanos:838531720}" Mar 2 13:01:07.965102 containerd[1568]: time="2026-03-02T13:01:07.964334845Z" level=info msg="StartContainer for \"b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616\" returns successfully" Mar 2 13:01:08.161494 kubelet[2792]: E0302 13:01:08.148596 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:08.183264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4438f4cc3f9c8645946be92c4f1f43599a3531a3bc993bdc22da216135c5616-rootfs.mount: Deactivated successfully. 
Mar 2 13:01:09.220278 kubelet[2792]: E0302 13:01:09.219936 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:09.289658 containerd[1568]: time="2026-03-02T13:01:09.289259707Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:01:09.394533 containerd[1568]: time="2026-03-02T13:01:09.392107652Z" level=info msg="Container 9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:01:09.398972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796748242.mount: Deactivated successfully.
Mar 2 13:01:09.423932 containerd[1568]: time="2026-03-02T13:01:09.420202399Z" level=info msg="CreateContainer within sandbox \"283af399ff15719d2424bccaf15bdabfa8ea792b58603fbf2f91e463de8be6a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121\""
Mar 2 13:01:09.423932 containerd[1568]: time="2026-03-02T13:01:09.421102589Z" level=info msg="StartContainer for \"9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121\""
Mar 2 13:01:09.434975 containerd[1568]: time="2026-03-02T13:01:09.431614296Z" level=info msg="connecting to shim 9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121" address="unix:///run/containerd/s/43d09004be6604518ef230b439ad29f9bf6c446d058a6ac7cbbecd9f42121ea0" protocol=ttrpc version=3
Mar 2 13:01:09.456879 kubelet[2792]: E0302 13:01:09.447612 2792 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-qtx5m" podUID="6325fa4b-1755-48b4-b3f7-f25b8f6ed550"
Mar 2 13:01:09.549813 systemd[1]: Started cri-containerd-9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121.scope - libcontainer container 9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121.
Mar 2 13:01:09.886253 containerd[1568]: time="2026-03-02T13:01:09.886091620Z" level=info msg="StartContainer for \"9436510ff719e071b82d4c2c3f3ebce89458f3cfe1c0065f95c6a349cda7f121\" returns successfully"
Mar 2 13:01:11.314491 kubelet[2792]: E0302 13:01:11.314230 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:11.452018 kubelet[2792]: E0302 13:01:11.447218 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:12.576937 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 2 13:01:13.310511 kubelet[2792]: E0302 13:01:13.310295 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:13.575110 update_engine[1558]: I20260302 13:01:13.574854 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:01:13.575110 update_engine[1558]: I20260302 13:01:13.575026 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:01:13.580574 update_engine[1558]: I20260302 13:01:13.575603 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:01:13.598719 update_engine[1558]: E20260302 13:01:13.598541 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:01:13.598719 update_engine[1558]: I20260302 13:01:13.598673 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 2 13:01:15.604108 kubelet[2792]: E0302 13:01:15.555219 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:21.352960 kubelet[2792]: E0302 13:01:21.351735 2792 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.907s"
Mar 2 13:01:23.654188 update_engine[1558]: I20260302 13:01:23.584666 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:01:23.785606 update_engine[1558]: I20260302 13:01:23.699868 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:01:23.785606 update_engine[1558]: I20260302 13:01:23.718713 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:01:23.785606 update_engine[1558]: E20260302 13:01:23.774721 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:01:23.785606 update_engine[1558]: I20260302 13:01:23.775542 1558 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 2 13:01:28.450814 kubelet[2792]: E0302 13:01:28.448309 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:29.309044 kubelet[2792]: E0302 13:01:29.301601 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:30.454022 kubelet[2792]: E0302 13:01:30.451809 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:32.015524 kubelet[2792]: E0302 13:01:31.990042 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:35.917045 update_engine[1558]: I20260302 13:01:35.295297 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:01:35.917045 update_engine[1558]: I20260302 13:01:35.887357 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:01:36.144185 update_engine[1558]: I20260302 13:01:36.142053 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:01:36.144185 update_engine[1558]: E20260302 13:01:36.142377 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:01:36.159855 update_engine[1558]: I20260302 13:01:36.149399 1558 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 2 13:01:36.159855 update_engine[1558]: I20260302 13:01:36.152190 1558 omaha_request_action.cc:617] Omaha request response:
Mar 2 13:01:36.300610 update_engine[1558]: E20260302 13:01:36.246604 1558 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391165 1558 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391238 1558 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391252 1558 update_attempter.cc:306] Processing Done.
Mar 2 13:01:36.394143 update_engine[1558]: E20260302 13:01:36.391276 1558 update_attempter.cc:619] Update failed.
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391330 1558 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391345 1558 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 2 13:01:36.394143 update_engine[1558]: I20260302 13:01:36.391356 1558 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 2 13:01:36.401825 update_engine[1558]: I20260302 13:01:36.401371 1558 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 2 13:01:36.411478 update_engine[1558]: I20260302 13:01:36.410604 1558 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 2 13:01:36.412792 update_engine[1558]: I20260302 13:01:36.411782 1558 omaha_request_action.cc:272] Request:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.412792 update_engine[1558]:
Mar 2 13:01:36.418198 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 2 13:01:36.426148 update_engine[1558]: I20260302 13:01:36.426080 1558 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 13:01:36.426348 update_engine[1558]: I20260302 13:01:36.426327 1558 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 13:01:36.427164 update_engine[1558]: I20260302 13:01:36.427134 1558 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 13:01:36.454606 update_engine[1558]: E20260302 13:01:36.454370 1558 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 13:01:36.458128 update_engine[1558]: I20260302 13:01:36.458081 1558 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 2 13:01:36.458279 update_engine[1558]: I20260302 13:01:36.458257 1558 omaha_request_action.cc:617] Omaha request response:
Mar 2 13:01:36.458353 update_engine[1558]: I20260302 13:01:36.458334 1558 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:01:36.458679 update_engine[1558]: I20260302 13:01:36.458403 1558 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 13:01:36.458770 update_engine[1558]: I20260302 13:01:36.458748 1558 update_attempter.cc:306] Processing Done.
Mar 2 13:01:36.458839 update_engine[1558]: I20260302 13:01:36.458819 1558 update_attempter.cc:310] Error event sent.
Mar 2 13:01:36.464376 update_engine[1558]: I20260302 13:01:36.464200 1558 update_check_scheduler.cc:74] Next update check in 43m23s
Mar 2 13:01:36.718478 kubelet[2792]: E0302 13:01:36.704649 2792 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.107s"
Mar 2 13:01:36.748776 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 2 13:01:43.676824 systemd-networkd[1479]: lxc_health: Link UP
Mar 2 13:01:43.758534 systemd-networkd[1479]: lxc_health: Gained carrier
Mar 2 13:01:45.305662 kubelet[2792]: E0302 13:01:45.305330 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:45.386526 systemd-networkd[1479]: lxc_health: Gained IPv6LL
Mar 2 13:01:45.449854 kubelet[2792]: I0302 13:01:45.449200 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-psrl5" podStartSLOduration=48.449154872 podStartE2EDuration="48.449154872s" podCreationTimestamp="2026-03-02 13:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:11.461673189 +0000 UTC m=+509.221089937" watchObservedRunningTime="2026-03-02 13:01:45.449154872 +0000 UTC m=+543.208571621"
Mar 2 13:01:46.107519 kubelet[2792]: E0302 13:01:46.105928 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:46.384616 containerd[1568]: time="2026-03-02T13:01:46.360929220Z" level=info msg="StopPodSandbox for \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\""
Mar 2 13:01:46.389586 containerd[1568]: time="2026-03-02T13:01:46.385830469Z" level=info msg="TearDown network for sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" successfully"
Mar 2 13:01:46.392314 containerd[1568]: time="2026-03-02T13:01:46.392225288Z" level=info msg="StopPodSandbox for \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" returns successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.414541034Z" level=info msg="RemovePodSandbox for \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\""
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.414696203Z" level=info msg="Forcibly stopping sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\""
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.415532564Z" level=info msg="TearDown network for sandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.436884833Z" level=info msg="Ensure that sandbox 9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28 in task-service has been cleanup successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.489640021Z" level=info msg="RemovePodSandbox \"9c774cef561cba03582534b00779e1843c0ac1e1bf94537bd58d51b4b317cd28\" returns successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490258504Z" level=info msg="StopPodSandbox for \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\""
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490416930Z" level=info msg="TearDown network for sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490525753Z" level=info msg="StopPodSandbox for \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" returns successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490774447Z" level=info msg="RemovePodSandbox for \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\""
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490807969Z" level=info msg="Forcibly stopping sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\""
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.490879422Z" level=info msg="TearDown network for sandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" successfully"
Mar 2 13:01:46.545080 containerd[1568]: time="2026-03-02T13:01:46.493105125Z" level=info msg="Ensure that sandbox 21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd in task-service has been cleanup successfully"
Mar 2 13:01:46.545653 containerd[1568]: time="2026-03-02T13:01:46.539306046Z" level=info msg="RemovePodSandbox \"21e5bad5acb308d23c613ae55f5be71096c2e195fe7f049cd2b90a28d5a41bdd\" returns successfully"
Mar 2 13:01:50.293414 kubelet[2792]: E0302 13:01:50.292726 2792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48254->127.0.0.1:34163: write tcp 127.0.0.1:48254->127.0.0.1:34163: write: broken pipe
Mar 2 13:01:57.492180 kubelet[2792]: E0302 13:01:57.492104 2792 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:04.209387 sshd[5632]: Connection closed by 10.0.0.1 port 48566
Mar 2 13:02:04.430192 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:04.816690 systemd-logind[1549]: Session 79 logged out. Waiting for processes to exit.
Mar 2 13:02:04.819297 systemd[1]: sshd@78-10.0.0.17:22-10.0.0.1:48566.service: Deactivated successfully.
Mar 2 13:02:04.829377 systemd[1]: session-79.scope: Deactivated successfully.
Mar 2 13:02:04.830367 systemd[1]: session-79.scope: Consumed 1.940s CPU time, 29.8M memory peak.
Mar 2 13:02:04.837202 systemd-logind[1549]: Removed session 79.