Mar 2 14:24:57.910127 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 14:24:57.910162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 14:24:57.910173 kernel: BIOS-provided physical RAM map:
Mar 2 14:24:57.910185 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 14:24:57.910193 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 14:24:57.911566 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 14:24:57.911577 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 14:24:57.911588 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 14:24:57.911597 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 14:24:57.911606 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 14:24:57.911615 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 2 14:24:57.911625 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 14:24:57.911638 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 14:24:57.911646 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 14:24:57.911656 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 14:24:57.911664 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 14:24:57.911673 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 14:24:57.911685 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 14:24:57.911695 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 14:24:57.911705 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 14:24:57.911715 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 14:24:57.911726 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 14:24:57.911735 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 14:24:57.911743 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 14:24:57.911751 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 14:24:57.911760 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 14:24:57.911768 kernel: NX (Execute Disable) protection: active
Mar 2 14:24:57.911776 kernel: APIC: Static calls initialized
Mar 2 14:24:57.911789 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 2 14:24:57.911801 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 2 14:24:57.911811 kernel: extended physical RAM map:
Mar 2 14:24:57.911821 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 14:24:57.911832 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 14:24:57.911840 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 14:24:57.911849 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 14:24:57.911857 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 14:24:57.911866 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 14:24:57.911874 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 14:24:57.911882 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 2 14:24:57.911897 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 2 14:24:57.911912 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 2 14:24:57.911922 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 2 14:24:57.911934 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 2 14:24:57.912045 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 14:24:57.912059 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 14:24:57.912068 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 14:24:57.912076 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 14:24:57.912086 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 14:24:57.912098 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 14:24:57.912108 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 14:24:57.912118 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 14:24:57.912130 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 14:24:57.912139 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 14:24:57.912148 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 14:24:57.912157 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 14:24:57.912170 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 14:24:57.912179 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 14:24:57.912187 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 14:24:57.913542 kernel: efi: EFI v2.7 by EDK II
Mar 2 14:24:57.913557 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 2 14:24:57.913568 kernel: random: crng init done
Mar 2 14:24:57.913579 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 2 14:24:57.913590 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 2 14:24:57.913601 kernel: secureboot: Secure boot disabled
Mar 2 14:24:57.913610 kernel: SMBIOS 2.8 present.
Mar 2 14:24:57.913619 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 2 14:24:57.913633 kernel: DMI: Memory slots populated: 1/1
Mar 2 14:24:57.913641 kernel: Hypervisor detected: KVM
Mar 2 14:24:57.913650 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 14:24:57.913659 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 14:24:57.913668 kernel: kvm-clock: using sched offset of 16883616008 cycles
Mar 2 14:24:57.913678 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 14:24:57.913687 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 14:24:57.913697 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 14:24:57.913709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 14:24:57.913718 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 14:24:57.913727 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 14:24:57.913741 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 14:24:57.913749 kernel: Using GB pages for direct mapping
Mar 2 14:24:57.913759 kernel: ACPI: Early table checksum verification disabled
Mar 2 14:24:57.913768 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 14:24:57.913777 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 14:24:57.913786 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913795 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913804 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 14:24:57.913819 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913830 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913842 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913851 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:24:57.913860 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 14:24:57.913869 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 14:24:57.913878 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 14:24:57.913887 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 14:24:57.913896 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 14:24:57.913909 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 14:24:57.913918 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 14:24:57.913927 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 14:24:57.914039 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 14:24:57.914050 kernel: No NUMA configuration found
Mar 2 14:24:57.914059 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 2 14:24:57.914068 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 2 14:24:57.914080 kernel: Zone ranges:
Mar 2 14:24:57.914090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 14:24:57.914103 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 2 14:24:57.914112 kernel: Normal empty
Mar 2 14:24:57.914121 kernel: Device empty
Mar 2 14:24:57.914130 kernel: Movable zone start for each node
Mar 2 14:24:57.914139 kernel: Early memory node ranges
Mar 2 14:24:57.914148 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 14:24:57.914157 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 14:24:57.914166 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 14:24:57.914175 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 2 14:24:57.914185 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 2 14:24:57.914811 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 2 14:24:57.914822 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 2 14:24:57.914831 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 2 14:24:57.914840 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 2 14:24:57.914850 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 14:24:57.914872 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 14:24:57.914888 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 14:24:57.914898 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 14:24:57.914907 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 2 14:24:57.914916 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 2 14:24:57.914926 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 2 14:24:57.915041 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 2 14:24:57.915052 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 2 14:24:57.915062 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 14:24:57.915071 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 14:24:57.915080 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 14:24:57.915094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 14:24:57.915103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 14:24:57.915113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 14:24:57.915124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 14:24:57.915135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 14:24:57.915145 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 14:24:57.915154 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 14:24:57.915163 kernel: TSC deadline timer available
Mar 2 14:24:57.915173 kernel: CPU topo: Max. logical packages: 1
Mar 2 14:24:57.915185 kernel: CPU topo: Max. logical dies: 1
Mar 2 14:24:57.915373 kernel: CPU topo: Max. dies per package: 1
Mar 2 14:24:57.915391 kernel: CPU topo: Max. threads per core: 1
Mar 2 14:24:57.915401 kernel: CPU topo: Num. cores per package: 4
Mar 2 14:24:57.915410 kernel: CPU topo: Num. threads per package: 4
Mar 2 14:24:57.915420 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 2 14:24:57.915429 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 14:24:57.915438 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 14:24:57.915448 kernel: kvm-guest: setup PV sched yield
Mar 2 14:24:57.915457 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 2 14:24:57.915474 kernel: Booting paravirtualized kernel on KVM
Mar 2 14:24:57.915486 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 14:24:57.915496 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 14:24:57.915505 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 2 14:24:57.915514 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 2 14:24:57.915524 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 14:24:57.915533 kernel: kvm-guest: PV spinlocks enabled
Mar 2 14:24:57.915543 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 14:24:57.915553 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 14:24:57.915567 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 14:24:57.915577 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 14:24:57.915588 kernel: Fallback order for Node 0: 0
Mar 2 14:24:57.915599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 2 14:24:57.915609 kernel: Policy zone: DMA32
Mar 2 14:24:57.915618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 14:24:57.915628 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 14:24:57.915637 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 14:24:57.915650 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 14:24:57.915660 kernel: Dynamic Preempt: voluntary
Mar 2 14:24:57.915669 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 14:24:57.915680 kernel: rcu: RCU event tracing is enabled.
Mar 2 14:24:57.915690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 14:24:57.915701 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 14:24:57.915713 kernel: Rude variant of Tasks RCU enabled.
Mar 2 14:24:57.915725 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 14:24:57.915734 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 14:24:57.915748 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 14:24:57.915758 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:24:57.915768 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:24:57.915778 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:24:57.915787 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 14:24:57.915796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 14:24:57.915805 kernel: Console: colour dummy device 80x25
Mar 2 14:24:57.915815 kernel: printk: legacy console [ttyS0] enabled
Mar 2 14:24:57.915826 kernel: ACPI: Core revision 20240827
Mar 2 14:24:57.915843 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 14:24:57.915853 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 14:24:57.915862 kernel: x2apic enabled
Mar 2 14:24:57.915871 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 14:24:57.915881 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 14:24:57.915891 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 14:24:57.915900 kernel: kvm-guest: setup PV IPIs
Mar 2 14:24:57.915910 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 14:24:57.915919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 14:24:57.915933 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 14:24:57.916050 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 14:24:57.916060 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 14:24:57.916070 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 14:24:57.916082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 14:24:57.916095 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 14:24:57.916104 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 14:24:57.916114 kernel: Speculative Store Bypass: Vulnerable
Mar 2 14:24:57.916123 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 14:24:57.916138 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 14:24:57.916148 kernel: active return thunk: srso_alias_return_thunk
Mar 2 14:24:57.916157 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 14:24:57.916167 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 14:24:57.916176 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 14:24:57.916186 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 14:24:57.916354 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 14:24:57.916367 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 14:24:57.916381 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 14:24:57.916392 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 14:24:57.916405 kernel: Freeing SMP alternatives memory: 32K
Mar 2 14:24:57.916414 kernel: pid_max: default: 32768 minimum: 301
Mar 2 14:24:57.916424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 14:24:57.916433 kernel: landlock: Up and running.
Mar 2 14:24:57.916442 kernel: SELinux: Initializing.
Mar 2 14:24:57.916452 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 14:24:57.916461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 14:24:57.916475 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 14:24:57.916484 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 14:24:57.916494 kernel: signal: max sigframe size: 1776
Mar 2 14:24:57.916504 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 14:24:57.916516 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 14:24:57.916527 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 2 14:24:57.916537 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 14:24:57.916546 kernel: smp: Bringing up secondary CPUs ...
Mar 2 14:24:57.916555 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 14:24:57.916568 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 14:24:57.916578 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 14:24:57.916587 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 14:24:57.916598 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145388K reserved, 0K cma-reserved)
Mar 2 14:24:57.916607 kernel: devtmpfs: initialized
Mar 2 14:24:57.916617 kernel: x86/mm: Memory block size: 128MB
Mar 2 14:24:57.916628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 14:24:57.916638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 14:24:57.916648 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 2 14:24:57.916662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 14:24:57.916672 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 2 14:24:57.916683 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 14:24:57.916693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 14:24:57.916704 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 14:24:57.916716 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 14:24:57.916727 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 14:24:57.916737 kernel: audit: initializing netlink subsys (disabled)
Mar 2 14:24:57.916746 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 14:24:57.916759 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 14:24:57.916769 kernel: audit: type=2000 audit(1772461478.100:1): state=initialized audit_enabled=0 res=1
Mar 2 14:24:57.916778 kernel: cpuidle: using governor menu
Mar 2 14:24:57.916788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 14:24:57.916800 kernel: dca service started, version 1.12.1
Mar 2 14:24:57.916811 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 2 14:24:57.916823 kernel: PCI: Using configuration type 1 for base access
Mar 2 14:24:57.916833 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 14:24:57.916843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 14:24:57.916857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 14:24:57.916866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 14:24:57.916876 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 14:24:57.916885 kernel: ACPI: Added _OSI(Module Device)
Mar 2 14:24:57.916894 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 14:24:57.916904 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 14:24:57.916913 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 14:24:57.916926 kernel: ACPI: Interpreter enabled
Mar 2 14:24:57.917019 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 14:24:57.917034 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 14:24:57.917043 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 14:24:57.917057 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 14:24:57.917069 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 14:24:57.917079 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 14:24:57.917517 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 14:24:57.917688 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 14:24:57.917855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 14:24:57.917870 kernel: PCI host bridge to bus 0000:00
Mar 2 14:24:57.919781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 14:24:57.920045 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 14:24:57.921039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 14:24:57.921898 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 2 14:24:57.922163 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 2 14:24:57.922665 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 2 14:24:57.922832 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 14:24:57.923118 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 14:24:57.925617 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 2 14:24:57.925788 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 2 14:24:57.926588 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 2 14:24:57.926769 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 2 14:24:57.927050 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 14:24:57.927924 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 35156 usecs
Mar 2 14:24:57.929573 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 2 14:24:57.929748 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 2 14:24:57.929914 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 2 14:24:57.930186 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 2 14:24:57.936688 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 14:24:57.936869 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 2 14:24:57.937138 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 2 14:24:57.937562 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 2 14:24:57.937749 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 14:24:57.937915 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 2 14:24:57.938189 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 2 14:24:57.938531 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 2 14:24:57.938694 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 2 14:24:57.938875 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 14:24:57.939149 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 14:24:57.939512 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 34179 usecs
Mar 2 14:24:57.939694 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 14:24:57.939866 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 2 14:24:57.941922 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 2 14:24:57.942568 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 14:24:57.942734 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 2 14:24:57.942749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 14:24:57.942759 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 14:24:57.942770 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 14:24:57.942782 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 14:24:57.942795 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 14:24:57.942810 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 14:24:57.942820 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 14:24:57.942830 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 14:24:57.942839 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 14:24:57.942849 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 14:24:57.942858 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 14:24:57.942868 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 14:24:57.942878 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 14:24:57.942887 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 14:24:57.942903 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 14:24:57.942915 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 14:24:57.942925 kernel: iommu: Default domain type: Translated
Mar 2 14:24:57.943029 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 14:24:57.943045 kernel: efivars: Registered efivars operations
Mar 2 14:24:57.943055 kernel: PCI: Using ACPI for IRQ routing
Mar 2 14:24:57.943065 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 14:24:57.943075 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 14:24:57.943084 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 2 14:24:57.943098 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 2 14:24:57.943107 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 2 14:24:57.943116 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 2 14:24:57.943126 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 2 14:24:57.943135 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 2 14:24:57.943146 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 2 14:24:57.943741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 14:24:57.943908 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 14:24:57.944181 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 14:24:57.945530 kernel: vgaarb: loaded
Mar 2 14:24:57.945551 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 14:24:57.945562 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 14:24:57.945571 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 14:24:57.945581 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 14:24:57.945591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 14:24:57.945603 kernel: pnp: PnP ACPI init
Mar 2 14:24:57.945785 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 2 14:24:57.945807 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 14:24:57.945817 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 14:24:57.945828 kernel: NET: Registered PF_INET protocol family
Mar 2 14:24:57.945840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 14:24:57.945853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 14:24:57.945884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 14:24:57.945898 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 14:24:57.945908 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 14:24:57.945924 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 14:24:57.946029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 14:24:57.946043 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 14:24:57.946054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 14:24:57.946064 kernel: NET: Registered PF_XDP protocol family
Mar 2 14:24:57.947649 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 2 14:24:57.947824 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 2 14:24:57.948079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 14:24:57.948928 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 14:24:57.949186 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 14:24:57.951117 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 2 14:24:57.951607 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 2 14:24:57.951758 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 2 14:24:57.951775 kernel: PCI: CLS 0 bytes, default 64
Mar 2 14:24:57.951786 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 14:24:57.951797 kernel: Initialise system trusted keyrings
Mar 2 14:24:57.951816 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 14:24:57.951828 kernel: Key type asymmetric registered
Mar 2 14:24:57.951841 kernel: Asymmetric key parser 'x509' registered
Mar 2 14:24:57.951852 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 14:24:57.951862 kernel: io scheduler mq-deadline registered
Mar 2 14:24:57.951872 kernel: io scheduler kyber registered
Mar 2 14:24:57.951882 kernel: io scheduler bfq registered
Mar 2 14:24:57.951892 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 14:24:57.951903 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 14:24:57.951921 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 14:24:57.951934 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 14:24:57.952050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 14:24:57.952061 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 14:24:57.952071 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 14:24:57.952081 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 14:24:57.952095 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 14:24:57.952583 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 14:24:57.952605 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 14:24:57.952757 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 14:24:57.952919 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T14:24:53 UTC (1772461493)
Mar 2 14:24:57.954022 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 2 14:24:57.954046 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 14:24:57.954058 kernel: efifb: probing for efifb
Mar 2 14:24:57.954073 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 2 14:24:57.954083 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 2 14:24:57.954093 kernel: efifb: scrolling: redraw
Mar 2 14:24:57.954103 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 14:24:57.954113 kernel: Console: switching to colour frame buffer device 160x50
Mar 2 14:24:57.954123 kernel: fb0: EFI VGA frame buffer device
Mar 2 14:24:57.954133 kernel: pstore: Using crash dump compression: deflate
Mar 2 14:24:57.954144 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 14:24:57.954156 kernel: NET: Registered PF_INET6 protocol family
Mar 2 14:24:57.954174 kernel: Segment Routing with IPv6
Mar 2 14:24:57.954184 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 14:24:57.954194 kernel: NET: Registered PF_PACKET protocol family
Mar 2 14:24:57.954373 kernel: Key type dns_resolver registered
Mar 2 14:24:57.954384 kernel: IPI shorthand broadcast: enabled
Mar 2 14:24:57.954397 kernel: sched_clock: Marking stable (12893066367, 2232629233)->(17134413807, -2008718207)
Mar 2 14:24:57.954409 kernel: registered taskstats version 1
Mar 2 14:24:57.954419 kernel: Loading compiled-in X.509 certificates
Mar 2 14:24:57.954429 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: ca052fea375a75b056ebd4154b64794dffb70b96'
Mar 2 14:24:57.954442 kernel: Demotion targets for Node 0: null
Mar 2 14:24:57.954452 kernel: Key type .fscrypt registered
Mar 2 14:24:57.954463 kernel: Key type fscrypt-provisioning registered Mar 2 14:24:57.954472 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 2 14:24:57.954483 kernel: ima: Allocated hash algorithm: sha1 Mar 2 14:24:57.954493 kernel: ima: No architecture policies found Mar 2 14:24:57.954504 kernel: clk: Disabling unused clocks Mar 2 14:24:57.954517 kernel: Warning: unable to open an initial console. Mar 2 14:24:57.954528 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 2 14:24:57.954543 kernel: Write protecting the kernel read-only data: 40960k Mar 2 14:24:57.954553 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 2 14:24:57.954563 kernel: Run /init as init process Mar 2 14:24:57.954572 kernel: with arguments: Mar 2 14:24:57.954583 kernel: /init Mar 2 14:24:57.954592 kernel: with environment: Mar 2 14:24:57.954602 kernel: HOME=/ Mar 2 14:24:57.954612 kernel: TERM=linux Mar 2 14:24:57.954626 kernel: hrtimer: interrupt took 5680435 ns Mar 2 14:24:57.954642 systemd[1]: Successfully made /usr/ read-only. Mar 2 14:24:57.954657 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 14:24:57.954668 systemd[1]: Detected virtualization kvm. Mar 2 14:24:57.954679 systemd[1]: Detected architecture x86-64. Mar 2 14:24:57.954690 systemd[1]: Running in initrd. Mar 2 14:24:57.954701 systemd[1]: No hostname configured, using default hostname. Mar 2 14:24:57.954712 systemd[1]: Hostname set to . Mar 2 14:24:57.954726 systemd[1]: Initializing machine ID from VM UUID. Mar 2 14:24:57.954737 systemd[1]: Queued start job for default target initrd.target. 
Mar 2 14:24:57.954749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 14:24:57.954763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 14:24:57.954774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 14:24:57.954785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 14:24:57.954795 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 14:24:57.954811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 14:24:57.954825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 14:24:57.954836 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 14:24:57.954847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 14:24:57.954858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 14:24:57.954871 systemd[1]: Reached target paths.target - Path Units. Mar 2 14:24:57.954884 systemd[1]: Reached target slices.target - Slice Units. Mar 2 14:24:57.954894 systemd[1]: Reached target swap.target - Swaps. Mar 2 14:24:57.954909 systemd[1]: Reached target timers.target - Timer Units. Mar 2 14:24:57.954920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 14:24:57.954930 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 14:24:57.955657 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 14:24:57.955671 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Mar 2 14:24:57.955682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 14:24:57.955693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 14:24:57.955703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 14:24:57.955714 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 14:24:57.955730 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 14:24:57.955741 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 14:24:57.955752 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 14:24:57.955764 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 2 14:24:57.955776 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 14:24:57.955788 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 14:24:57.955800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 14:24:57.955812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:24:57.955865 systemd-journald[203]: Collecting audit messages is disabled. Mar 2 14:24:57.955893 systemd-journald[203]: Journal started Mar 2 14:24:57.955919 systemd-journald[203]: Runtime Journal (/run/log/journal/da3c42c2555a4fdeab0797ab74eca4d6) is 6M, max 48.1M, 42.1M free. Mar 2 14:24:58.003335 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 14:24:58.005147 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 14:24:58.036920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 14:24:58.144180 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 14:24:58.169641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 2 14:24:58.207035 systemd-modules-load[205]: Inserted module 'overlay' Mar 2 14:24:58.278647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 14:24:58.328631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 14:24:58.458486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 14:24:58.632172 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 14:24:58.676531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 14:24:58.719578 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 14:24:58.794825 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 2 14:24:58.967548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 14:24:59.013021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 14:24:59.165788 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 14:24:59.511908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 14:24:59.696548 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 2 14:24:59.737038 kernel: Bridge firewalling registered Mar 2 14:24:59.732639 systemd-modules-load[205]: Inserted module 'br_netfilter' Mar 2 14:24:59.738490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 14:24:59.777731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 14:25:00.042915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 14:25:00.209882 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 14:25:00.364571 kernel: SCSI subsystem initialized Mar 2 14:25:00.481780 kernel: Loading iSCSI transport class v2.0-870. Mar 2 14:25:00.657445 systemd-resolved[338]: Positive Trust Anchors: Mar 2 14:25:00.657556 systemd-resolved[338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 14:25:00.657592 systemd-resolved[338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 14:25:00.664453 systemd-resolved[338]: Defaulting to hostname 'linux'. Mar 2 14:25:00.673555 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 14:25:00.700644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 2 14:25:01.093625 kernel: iscsi: registered transport (tcp) Mar 2 14:25:01.233593 kernel: iscsi: registered transport (qla4xxx) Mar 2 14:25:01.233678 kernel: QLogic iSCSI HBA Driver Mar 2 14:25:01.452831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 14:25:01.592447 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 14:25:01.631745 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 14:25:02.122492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 14:25:02.158609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 14:25:02.478904 kernel: raid6: avx2x4 gen() 11310 MB/s Mar 2 14:25:02.536831 kernel: raid6: avx2x2 gen() 1489 MB/s Mar 2 14:25:02.597822 kernel: raid6: avx2x1 gen() 2355 MB/s Mar 2 14:25:02.597896 kernel: raid6: using algorithm avx2x4 gen() 11310 MB/s Mar 2 14:25:02.648598 kernel: raid6: .... xor() 1009 MB/s, rmw enabled Mar 2 14:25:02.648673 kernel: raid6: using avx2x2 recovery algorithm Mar 2 14:25:02.777892 kernel: xor: automatically using best checksumming function avx Mar 2 14:25:04.704861 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 14:25:04.799602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 14:25:04.884444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 14:25:05.112905 systemd-udevd[456]: Using default interface naming scheme 'v255'. Mar 2 14:25:05.153446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 14:25:05.260090 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 14:25:05.513687 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Mar 2 14:25:05.771401 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 14:25:05.823382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 14:25:06.137047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 14:25:06.210462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 14:25:06.564851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 14:25:06.565108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:06.687848 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 14:25:06.689118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:06.761438 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 14:25:06.723811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:06.838068 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 14:25:06.838122 kernel: GPT:9289727 != 19775487 Mar 2 14:25:06.838508 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 14:25:06.838525 kernel: GPT:9289727 != 19775487 Mar 2 14:25:06.838542 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 14:25:06.838556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:06.860889 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 14:25:06.912083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 14:25:06.912730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:06.998850 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 14:25:07.014118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 2 14:25:07.077419 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 14:25:07.346943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:07.448878 kernel: libata version 3.00 loaded. Mar 2 14:25:07.597530 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 14:25:07.618630 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 14:25:07.721892 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 14:25:07.722596 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 14:25:07.722804 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 14:25:07.756951 kernel: AES CTR mode by8 optimization enabled Mar 2 14:25:07.773855 kernel: scsi host0: ahci Mar 2 14:25:07.802082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 14:25:07.864642 kernel: scsi host1: ahci Mar 2 14:25:07.864901 kernel: scsi host2: ahci Mar 2 14:25:07.865090 kernel: scsi host3: ahci Mar 2 14:25:07.865912 kernel: scsi host4: ahci Mar 2 14:25:07.907613 kernel: scsi host5: ahci Mar 2 14:25:07.907905 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 2 14:25:08.118604 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 2 14:25:08.118643 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 2 14:25:08.118668 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 2 14:25:08.118682 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 2 14:25:08.118694 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 2 14:25:08.118707 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 2 14:25:08.126079 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 14:25:08.191988 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 14:25:08.274105 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 2 14:25:08.288767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 14:25:08.348646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 14:25:08.399969 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:08.418423 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 14:25:08.459922 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:08.459994 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:08.484090 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:08.501530 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:08.538568 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 14:25:08.538634 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 14:25:08.538654 kernel: ata3.00: applying bridge limits Mar 2 14:25:08.556573 disk-uuid[625]: Primary Header is updated. 
Mar 2 14:25:08.556573 disk-uuid[625]: Secondary Entries is updated. Mar 2 14:25:08.556573 disk-uuid[625]: Secondary Header is updated. Mar 2 14:25:08.729556 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 14:25:08.729595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:08.729612 kernel: ata3.00: configured for UDMA/100 Mar 2 14:25:08.729627 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 14:25:08.947495 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 14:25:08.949530 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 14:25:08.983861 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 14:25:09.706444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:09.728793 disk-uuid[626]: The operation has completed successfully. Mar 2 14:25:09.927075 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 14:25:09.929605 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 14:25:09.962499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 14:25:10.145985 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 14:25:10.153475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 14:25:10.153526 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 14:25:10.156572 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 14:25:10.258664 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 14:25:10.455625 sh[647]: Success Mar 2 14:25:10.488868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 14:25:10.670635 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 2 14:25:10.670716 kernel: device-mapper: uevent: version 1.0.3 Mar 2 14:25:10.681368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 14:25:10.870542 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 2 14:25:11.053928 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 14:25:11.129681 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 14:25:11.230130 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 14:25:11.299036 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (666) Mar 2 14:25:11.320532 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 14:25:11.320605 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:11.469470 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 14:25:11.469621 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 14:25:11.484977 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 14:25:11.494980 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 2 14:25:11.495082 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 14:25:11.561462 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 14:25:11.614669 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 2 14:25:11.927881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (699) Mar 2 14:25:11.974474 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:11.974549 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:12.094920 kernel: BTRFS info (device vda6): turning on async discard Mar 2 14:25:12.094993 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 14:25:12.166443 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:12.219657 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 14:25:12.260472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 14:25:12.878821 ignition[768]: Ignition 2.22.0 Mar 2 14:25:12.878930 ignition[768]: Stage: fetch-offline Mar 2 14:25:12.905923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 14:25:12.878971 ignition[768]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:12.953011 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 14:25:12.878985 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:12.879091 ignition[768]: parsed url from cmdline: "" Mar 2 14:25:12.879097 ignition[768]: no config URL provided Mar 2 14:25:12.879105 ignition[768]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 14:25:12.879117 ignition[768]: no config at "/usr/lib/ignition/user.ign" Mar 2 14:25:12.879145 ignition[768]: op(1): [started] loading QEMU firmware config module Mar 2 14:25:12.879630 ignition[768]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 14:25:12.957753 ignition[768]: op(1): [finished] loading QEMU firmware config module Mar 2 14:25:13.332664 systemd-networkd[843]: lo: Link UP Mar 2 14:25:13.334068 systemd-networkd[843]: lo: Gained carrier Mar 2 14:25:13.388125 systemd-networkd[843]: Enumeration completed Mar 2 14:25:13.412983 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 14:25:13.461038 systemd[1]: Reached target network.target - Network. Mar 2 14:25:13.497763 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:13.497846 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 14:25:13.543683 systemd-networkd[843]: eth0: Link UP Mar 2 14:25:13.602556 systemd-networkd[843]: eth0: Gained carrier Mar 2 14:25:13.602584 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 2 14:25:13.711977 systemd-networkd[843]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 14:25:14.663719 ignition[768]: parsing config with SHA512: b9cf6dc3375c6214336e845589816cf3c399b22910127610223f0c7360feaba91457b9737aef6dd4a00a9f59c91e360b1930c96fe22dee5549da439c86f190df Mar 2 14:25:14.688831 systemd-networkd[843]: eth0: Gained IPv6LL Mar 2 14:25:14.705786 unknown[768]: fetched base config from "system" Mar 2 14:25:14.706477 ignition[768]: fetch-offline: fetch-offline passed Mar 2 14:25:14.705796 unknown[768]: fetched user config from "qemu" Mar 2 14:25:14.706756 ignition[768]: Ignition finished successfully Mar 2 14:25:14.774573 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 14:25:14.831035 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 14:25:14.838551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 14:25:15.090654 ignition[848]: Ignition 2.22.0 Mar 2 14:25:15.092685 ignition[848]: Stage: kargs Mar 2 14:25:15.092890 ignition[848]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:15.092905 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:15.133055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 14:25:15.104038 ignition[848]: kargs: kargs passed Mar 2 14:25:15.104121 ignition[848]: Ignition finished successfully Mar 2 14:25:15.216700 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 14:25:16.111503 ignition[856]: Ignition 2.22.0 Mar 2 14:25:16.113609 ignition[856]: Stage: disks Mar 2 14:25:16.138475 ignition[856]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:16.138498 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:16.140149 ignition[856]: disks: disks passed Mar 2 14:25:16.141713 ignition[856]: Ignition finished successfully Mar 2 14:25:16.264690 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 14:25:16.290942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 14:25:16.322052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 14:25:16.387495 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 14:25:16.395118 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 14:25:16.485753 systemd[1]: Reached target basic.target - Basic System. Mar 2 14:25:16.556060 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 14:25:16.772677 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 2 14:25:16.819169 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 14:25:16.855632 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 14:25:18.210372 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none. Mar 2 14:25:18.213729 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 14:25:18.229583 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 14:25:18.270169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 14:25:18.314038 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 14:25:18.332456 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 2 14:25:18.332528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 14:25:18.332570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 14:25:18.503868 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (875) Mar 2 14:25:18.447090 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 14:25:18.593576 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:18.593610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:18.553606 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 14:25:18.661364 kernel: BTRFS info (device vda6): turning on async discard Mar 2 14:25:18.661417 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 14:25:18.667125 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 14:25:18.995639 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 14:25:19.051864 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory Mar 2 14:25:19.103542 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 14:25:19.148047 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 14:25:20.004889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 14:25:20.031679 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 14:25:20.090699 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 14:25:20.128721 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 14:25:20.160581 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 14:25:20.326941 ignition[987]: INFO : Ignition 2.22.0
Mar 2 14:25:20.326941 ignition[987]: INFO : Stage: mount
Mar 2 14:25:20.326941 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 14:25:20.326941 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 14:25:20.438131 ignition[987]: INFO : mount: mount passed
Mar 2 14:25:20.438131 ignition[987]: INFO : Ignition finished successfully
Mar 2 14:25:20.330951 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 14:25:20.351781 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 14:25:20.392898 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 14:25:20.531473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 14:25:20.641524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1001)
Mar 2 14:25:20.671125 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb
Mar 2 14:25:20.671399 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 14:25:20.756896 kernel: BTRFS info (device vda6): turning on async discard
Mar 2 14:25:20.756970 kernel: BTRFS info (device vda6): enabling free space tree
Mar 2 14:25:20.763040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 14:25:20.932830 ignition[1018]: INFO : Ignition 2.22.0
Mar 2 14:25:20.932830 ignition[1018]: INFO : Stage: files
Mar 2 14:25:20.969081 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 14:25:20.969081 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 14:25:20.969081 ignition[1018]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 14:25:21.036557 ignition[1018]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 14:25:21.036557 ignition[1018]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 14:25:21.093762 ignition[1018]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 14:25:21.093762 ignition[1018]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 14:25:21.093762 ignition[1018]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 14:25:21.093762 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 14:25:21.093762 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 14:25:21.069936 unknown[1018]: wrote ssh authorized keys file for user: core
Mar 2 14:25:21.279657 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 14:25:21.494576 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 14:25:21.536976 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 14:25:21.536976 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 14:25:21.684567 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 14:25:21.989931 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 14:25:22.031767 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 2 14:25:22.314874 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 14:25:23.520468 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 14:25:23.520468 ignition[1018]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 2 14:25:23.592173 ignition[1018]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 14:25:23.807534 ignition[1018]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 14:25:23.852464 ignition[1018]: INFO : files: files passed
Mar 2 14:25:23.852464 ignition[1018]: INFO : Ignition finished successfully
Mar 2 14:25:23.858447 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 14:25:23.909178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 14:25:24.001859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 14:25:24.022553 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 14:25:24.058637 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 14:25:24.100125 initrd-setup-root-after-ignition[1048]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 14:25:24.121015 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:24.121015 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:24.148645 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:24.162936 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 14:25:24.184797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 14:25:24.204580 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 14:25:24.383579 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 14:25:24.385535 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 14:25:24.395010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 14:25:24.395094 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 14:25:24.395432 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 14:25:24.400735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 14:25:24.585802 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 14:25:24.618609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 14:25:24.681797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 14:25:24.688556 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 14:25:24.723725 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 14:25:24.730167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 14:25:24.732114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 14:25:24.736514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 14:25:24.736716 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 14:25:24.736849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 14:25:24.736975 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 14:25:24.737102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 14:25:24.737434 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 2 14:25:24.737575 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 14:25:24.737710 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 14:25:24.737913 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 14:25:24.738051 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 14:25:24.738193 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 14:25:24.738475 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 14:25:24.738637 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 14:25:24.738971 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 14:25:24.739121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 14:25:24.742857 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 14:25:24.746754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 14:25:24.856651 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 14:25:24.856915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 14:25:24.879520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 14:25:24.879816 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 14:25:24.895031 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 14:25:24.910430 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 14:25:24.917448 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 14:25:24.940617 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 14:25:25.011767 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 14:25:25.018763 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 14:25:25.018927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 14:25:25.029462 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 14:25:25.030944 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 14:25:25.038877 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 14:25:25.386120 ignition[1074]: INFO : Ignition 2.22.0
Mar 2 14:25:25.386120 ignition[1074]: INFO : Stage: umount
Mar 2 14:25:25.386120 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 14:25:25.386120 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 14:25:25.386120 ignition[1074]: INFO : umount: umount passed
Mar 2 14:25:25.386120 ignition[1074]: INFO : Ignition finished successfully
Mar 2 14:25:25.040781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 14:25:25.082577 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 14:25:25.082766 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 14:25:25.098969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 14:25:25.174596 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 14:25:25.179475 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 14:25:25.193647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 14:25:25.224391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 14:25:25.224735 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 14:25:25.244763 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 14:25:25.244940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 14:25:25.313885 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 14:25:25.314040 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 14:25:25.331051 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 14:25:25.351510 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 14:25:25.351671 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 14:25:25.368616 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 14:25:25.368838 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 14:25:25.397861 systemd[1]: Stopped target network.target - Network.
Mar 2 14:25:25.423747 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 14:25:25.423887 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 14:25:25.433508 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 14:25:25.433617 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 14:25:25.442771 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 14:25:25.442884 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 14:25:25.456669 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 14:25:25.456772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 14:25:25.468943 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 14:25:25.469045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 14:25:25.480867 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 14:25:25.504620 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 14:25:25.536600 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 14:25:25.536821 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 14:25:25.596490 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 2 14:25:25.599823 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 14:25:25.600751 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 14:25:25.667871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 2 14:25:25.672074 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 14:25:25.674064 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 14:25:25.698872 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 2 14:25:25.702051 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 2 14:25:25.728859 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 14:25:25.728999 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 14:25:25.757628 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 14:25:25.789064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 14:25:25.789943 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 14:25:25.824867 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 14:25:25.824977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 14:25:25.846537 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 14:25:25.846631 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 14:25:25.879616 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 14:25:25.924902 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 14:25:25.945941 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 14:25:25.947742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 14:25:25.973757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 14:25:25.973856 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 14:25:25.980073 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 14:25:25.980445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 14:25:25.998606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 14:25:25.998724 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 14:25:26.041168 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 14:25:26.041523 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 14:25:26.085998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 14:25:26.086102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 14:25:26.164701 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 14:25:26.199441 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 2 14:25:26.199597 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 14:25:26.241685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 14:25:26.241782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 14:25:26.260817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 14:25:26.260913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 14:25:26.524883 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 2 14:25:26.293596 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 14:25:26.293806 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 14:25:26.306120 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 14:25:26.306468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 14:25:26.328623 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 14:25:26.341920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 14:25:26.364829 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 2 14:25:26.364926 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 2 14:25:26.365001 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 2 14:25:26.446492 systemd[1]: Switching root.
Mar 2 14:25:26.624547 systemd-journald[203]: Journal stopped
Mar 2 14:25:30.211104 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 14:25:30.212021 kernel: SELinux: policy capability open_perms=1
Mar 2 14:25:30.212056 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 14:25:30.212073 kernel: SELinux: policy capability always_check_network=0
Mar 2 14:25:30.212089 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 14:25:30.212104 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 14:25:30.212119 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 14:25:30.212139 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 14:25:30.212162 kernel: SELinux: policy capability userspace_initial_context=0
Mar 2 14:25:30.212181 kernel: audit: type=1403 audit(1772461527.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 14:25:30.212404 systemd[1]: Successfully loaded SELinux policy in 156.771ms.
Mar 2 14:25:30.212444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.435ms.
Mar 2 14:25:30.212463 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 2 14:25:30.212480 systemd[1]: Detected virtualization kvm.
Mar 2 14:25:30.212497 systemd[1]: Detected architecture x86-64.
Mar 2 14:25:30.212516 systemd[1]: Detected first boot.
Mar 2 14:25:30.212533 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 14:25:30.212549 zram_generator::config[1120]: No configuration found.
Mar 2 14:25:30.212567 kernel: Guest personality initialized and is inactive
Mar 2 14:25:30.212583 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 2 14:25:30.212599 kernel: Initialized host personality
Mar 2 14:25:30.212615 kernel: NET: Registered PF_VSOCK protocol family
Mar 2 14:25:30.212631 systemd[1]: Populated /etc with preset unit settings.
Mar 2 14:25:30.212650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 2 14:25:30.212670 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 14:25:30.212686 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 14:25:30.212702 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 14:25:30.212719 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 14:25:30.212736 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 14:25:30.212753 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 14:25:30.212772 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 14:25:30.212789 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 14:25:30.212809 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 14:25:30.212826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 14:25:30.212843 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 14:25:30.212859 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 14:25:30.212876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 14:25:30.212893 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 14:25:30.212909 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 14:25:30.212926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 14:25:30.212943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 14:25:30.212964 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 14:25:30.212980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 14:25:30.212997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 14:25:30.213014 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 14:25:30.213030 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 14:25:30.213046 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 14:25:30.213063 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 14:25:30.213080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 14:25:30.213100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 14:25:30.213116 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 14:25:30.213135 systemd[1]: Reached target swap.target - Swaps.
Mar 2 14:25:30.213152 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 14:25:30.213168 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 14:25:30.213184 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 2 14:25:30.213336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 14:25:30.213366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 14:25:30.213383 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 14:25:30.213402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 14:25:30.213419 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 14:25:30.213436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 14:25:30.213453 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 14:25:30.213470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 14:25:30.213487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 14:25:30.213504 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 14:25:30.213526 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 14:25:30.213542 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 14:25:30.213563 systemd[1]: Reached target machines.target - Containers.
Mar 2 14:25:30.213580 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 14:25:30.213596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 14:25:30.213613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 14:25:30.213629 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 14:25:30.213648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 14:25:30.213665 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 14:25:30.213681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 14:25:30.213700 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 14:25:30.213717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 14:25:30.213734 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 14:25:30.213850 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 14:25:30.213869 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 14:25:30.213885 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 14:25:30.213902 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 14:25:30.213919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 14:25:30.213941 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 14:25:30.213958 kernel: fuse: init (API version 7.41)
Mar 2 14:25:30.213974 kernel: loop: module loaded
Mar 2 14:25:30.213990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 14:25:30.214007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 14:25:30.214023 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 14:25:30.214040 kernel: ACPI: bus type drm_connector registered
Mar 2 14:25:30.214088 systemd-journald[1205]: Collecting audit messages is disabled.
Mar 2 14:25:30.214127 systemd-journald[1205]: Journal started
Mar 2 14:25:30.214157 systemd-journald[1205]: Runtime Journal (/run/log/journal/da3c42c2555a4fdeab0797ab74eca4d6) is 6M, max 48.1M, 42.1M free.
Mar 2 14:25:28.597859 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 14:25:28.639051 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 14:25:28.640444 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 14:25:28.641175 systemd[1]: systemd-journald.service: Consumed 2.617s CPU time.
Mar 2 14:25:30.223369 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 2 14:25:30.264925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 14:25:30.279933 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 14:25:30.280038 systemd[1]: Stopped verity-setup.service.
Mar 2 14:25:30.299500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 14:25:30.308340 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 14:25:30.319731 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 14:25:30.333736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 14:25:30.344802 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 14:25:30.355109 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 14:25:30.364076 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 14:25:30.372503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 14:25:30.379504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 14:25:30.389013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 14:25:30.399411 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 14:25:30.400114 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 14:25:30.412080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 14:25:30.414758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 14:25:30.428712 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 14:25:30.430555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 14:25:30.443842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 14:25:30.446430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 14:25:30.457919 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 14:25:30.458745 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 14:25:30.476479 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 14:25:30.477963 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 14:25:30.488594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 14:25:30.498093 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 14:25:30.523510 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 14:25:30.541731 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 2 14:25:30.552680 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 14:25:30.594588 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 14:25:30.614611 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 14:25:30.647764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 14:25:30.660128 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 14:25:30.660476 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 14:25:30.675815 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 2 14:25:30.695341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 2 14:25:30.703981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 14:25:30.726417 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 14:25:30.750467 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 14:25:30.763002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 14:25:30.784042 systemd-journald[1205]: Time spent on flushing to /var/log/journal/da3c42c2555a4fdeab0797ab74eca4d6 is 41.380ms for 1066 entries. Mar 2 14:25:30.784042 systemd-journald[1205]: System Journal (/var/log/journal/da3c42c2555a4fdeab0797ab74eca4d6) is 8M, max 195.6M, 187.6M free. Mar 2 14:25:30.848101 systemd-journald[1205]: Received client request to flush runtime journal. Mar 2 14:25:30.768746 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 14:25:30.793340 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 14:25:30.795069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 14:25:30.821164 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 14:25:30.836466 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 14:25:30.850023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 14:25:30.864505 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 14:25:30.884924 kernel: loop0: detected capacity change from 0 to 128560 Mar 2 14:25:30.887040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 14:25:30.901189 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Mar 2 14:25:30.922806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 14:25:30.940694 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 14:25:30.970791 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 2 14:25:30.983537 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 14:25:31.066819 kernel: loop1: detected capacity change from 0 to 110984 Mar 2 14:25:31.067684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 14:25:31.078951 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 14:25:31.089100 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 2 14:25:31.106833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 14:25:31.189492 kernel: loop2: detected capacity change from 0 to 219192 Mar 2 14:25:31.201888 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 2 14:25:31.201958 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 2 14:25:31.217995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 14:25:31.286100 kernel: loop3: detected capacity change from 0 to 128560 Mar 2 14:25:31.348415 kernel: loop4: detected capacity change from 0 to 110984 Mar 2 14:25:31.414407 kernel: loop5: detected capacity change from 0 to 219192 Mar 2 14:25:31.505741 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 14:25:31.506899 (sd-merge)[1263]: Merged extensions into '/usr'. Mar 2 14:25:31.520147 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 14:25:31.520456 systemd[1]: Reloading... Mar 2 14:25:32.404471 zram_generator::config[1285]: No configuration found. Mar 2 14:25:33.080726 systemd[1]: Reloading finished in 1559 ms. 
Mar 2 14:25:33.093868 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 14:25:33.132773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 14:25:33.157846 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 14:25:33.216954 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 14:25:33.296587 systemd[1]: Starting ensure-sysext.service... Mar 2 14:25:33.316161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 14:25:33.354763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 14:25:33.410974 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Mar 2 14:25:33.413840 systemd[1]: Reloading... Mar 2 14:25:33.451821 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 2 14:25:33.451882 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 2 14:25:33.452814 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 14:25:33.453415 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 14:25:33.460527 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 14:25:33.461045 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Mar 2 14:25:33.462473 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Mar 2 14:25:33.483778 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 2 14:25:33.483842 systemd-tmpfiles[1328]: Skipping /boot Mar 2 14:25:33.549140 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Mar 2 14:25:33.567919 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 14:25:33.567990 systemd-tmpfiles[1328]: Skipping /boot Mar 2 14:25:33.658516 zram_generator::config[1355]: No configuration found. Mar 2 14:25:34.166358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 2 14:25:34.180459 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 14:25:34.188451 kernel: ACPI: button: Power Button [PWRF] Mar 2 14:25:34.217123 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 14:25:34.217582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 14:25:34.234531 systemd[1]: Reloading finished in 818 ms. Mar 2 14:25:34.253562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 14:25:34.272630 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 2 14:25:34.273035 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 14:25:34.274941 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 14:25:34.281829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 14:25:34.424355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:34.427510 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 14:25:34.439704 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 14:25:34.453758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 14:25:34.460678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 2 14:25:34.512587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 14:25:34.528960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 14:25:34.539105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 14:25:34.557465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 14:25:34.570522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 14:25:34.576977 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 14:25:34.598609 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 14:25:34.626733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 14:25:34.643388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 14:25:34.655602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:34.659943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 14:25:34.660753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 14:25:34.677963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 14:25:34.679104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 14:25:34.699812 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 14:25:34.700176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 2 14:25:34.735392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:34.735679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 14:25:34.742099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 14:25:34.758751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 14:25:34.784922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 14:25:34.806716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 14:25:34.807021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 14:25:34.807151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 14:25:34.807456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:34.883054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 14:25:34.890486 systemd[1]: Finished ensure-sysext.service. Mar 2 14:25:34.922754 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 14:25:34.946022 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 14:25:34.964001 augenrules[1483]: No rules Mar 2 14:25:34.966728 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 14:25:34.967165 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 14:25:34.983561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 2 14:25:34.983894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 14:25:35.001072 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 14:25:35.001583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 14:25:35.015759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 14:25:35.016039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 14:25:35.027715 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 14:25:35.028130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 14:25:35.088456 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 14:25:35.170985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 14:25:35.171184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 14:25:35.177597 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 14:25:35.194584 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 14:25:35.298620 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 14:25:35.317859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:35.331926 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 14:25:35.375517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 14:25:35.610570 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 14:25:35.629162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 2 14:25:35.937974 systemd-networkd[1452]: lo: Link UP Mar 2 14:25:35.938039 systemd-networkd[1452]: lo: Gained carrier Mar 2 14:25:35.942751 systemd-networkd[1452]: Enumeration completed Mar 2 14:25:35.943054 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 14:25:35.945409 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:35.945473 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 14:25:35.950981 systemd-networkd[1452]: eth0: Link UP Mar 2 14:25:35.951537 systemd-networkd[1452]: eth0: Gained carrier Mar 2 14:25:35.951627 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:35.957931 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 2 14:25:35.969629 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 14:25:35.991065 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 14:25:36.008947 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 14:25:36.023811 systemd-networkd[1452]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 14:25:36.027563 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. Mar 2 14:25:36.034723 systemd-resolved[1454]: Positive Trust Anchors: Mar 2 14:25:36.034740 systemd-resolved[1454]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 14:25:36.034787 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 14:25:36.599275 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 14:25:36.599539 systemd-timesyncd[1498]: Initial clock synchronization to Mon 2026-03-02 14:25:36.598954 UTC. Mar 2 14:25:36.600661 systemd-resolved[1454]: Defaulting to hostname 'linux'. Mar 2 14:25:36.605125 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 14:25:36.614580 systemd[1]: Reached target network.target - Network. Mar 2 14:25:36.623521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 14:25:36.637253 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 14:25:36.647249 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 14:25:36.665566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 14:25:36.685547 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 2 14:25:36.704564 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 14:25:36.727112 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 14:25:36.743851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 2 14:25:36.761454 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 14:25:36.761647 systemd[1]: Reached target paths.target - Path Units. Mar 2 14:25:36.773150 systemd[1]: Reached target timers.target - Timer Units. Mar 2 14:25:36.801583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 14:25:36.826350 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 14:25:36.852533 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 2 14:25:36.855592 kernel: kvm_amd: TSC scaling supported Mar 2 14:25:36.856114 kernel: kvm_amd: Nested Virtualization enabled Mar 2 14:25:36.856159 kernel: kvm_amd: Nested Paging enabled Mar 2 14:25:36.856231 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 14:25:36.856258 kernel: kvm_amd: PMU virtualization is disabled Mar 2 14:25:36.878259 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 2 14:25:36.900902 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 2 14:25:36.925609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 14:25:36.938608 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 2 14:25:36.954196 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 2 14:25:36.965265 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 14:25:36.983478 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 14:25:36.996512 systemd[1]: Reached target basic.target - Basic System. Mar 2 14:25:37.007649 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 2 14:25:37.008167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 14:25:37.011241 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 14:25:37.029064 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 14:25:37.046113 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 14:25:37.073357 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 14:25:37.106347 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 14:25:37.118428 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 14:25:37.123405 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 2 14:25:37.157156 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 14:25:37.178660 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 14:25:37.209139 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 14:25:37.235826 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 14:25:37.278503 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 14:25:37.296185 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 14:25:37.297094 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 14:25:37.299003 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 14:25:37.330856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 2 14:25:37.356402 jq[1527]: false Mar 2 14:25:37.360178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 14:25:37.373556 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache Mar 2 14:25:37.378849 oslogin_cache_refresh[1529]: Refreshing passwd entry cache Mar 2 14:25:37.379961 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 14:25:37.380469 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 14:25:37.406071 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 14:25:37.407526 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting Mar 2 14:25:37.407526 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 2 14:25:37.407526 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache Mar 2 14:25:37.406536 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 2 14:25:37.406383 oslogin_cache_refresh[1529]: Failure getting users, quitting Mar 2 14:25:37.406403 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 2 14:25:37.406460 oslogin_cache_refresh[1529]: Refreshing group entry cache Mar 2 14:25:37.413903 extend-filesystems[1528]: Found /dev/vda6 Mar 2 14:25:37.425951 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting Mar 2 14:25:37.425951 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 2 14:25:37.425640 oslogin_cache_refresh[1529]: Failure getting groups, quitting Mar 2 14:25:37.425656 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Mar 2 14:25:37.429410 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 2 14:25:37.431077 extend-filesystems[1528]: Found /dev/vda9 Mar 2 14:25:37.430533 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 2 14:25:37.452430 extend-filesystems[1528]: Checking size of /dev/vda9 Mar 2 14:25:37.439296 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 14:25:37.499458 update_engine[1535]: I20260302 14:25:37.479378 1535 main.cc:92] Flatcar Update Engine starting Mar 2 14:25:37.448619 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 14:25:37.502180 extend-filesystems[1528]: Resized partition /dev/vda9 Mar 2 14:25:37.512088 jq[1536]: true Mar 2 14:25:37.509339 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 14:25:37.541756 extend-filesystems[1565]: resize2fs 1.47.3 (8-Jul-2025) Mar 2 14:25:37.589116 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 14:25:37.648656 tar[1538]: linux-amd64/LICENSE Mar 2 14:25:37.649425 tar[1538]: linux-amd64/helm Mar 2 14:25:37.694291 dbus-daemon[1525]: [system] SELinux support is enabled Mar 2 14:25:37.718134 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 14:25:37.778135 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 14:25:37.778366 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 14:25:37.804331 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 2 14:25:37.804369 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 14:25:37.814854 jq[1562]: true Mar 2 14:25:38.072991 update_engine[1535]: I20260302 14:25:37.875597 1535 update_check_scheduler.cc:74] Next update check in 4m27s Mar 2 14:25:37.878070 systemd[1]: Started update-engine.service - Update Engine. Mar 2 14:25:37.962537 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 14:25:38.179609 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 14:25:38.230417 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 14:25:38.230417 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 14:25:38.230417 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 14:25:38.437280 kernel: EDAC MC: Ver: 3.0.0 Mar 2 14:25:38.233663 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 14:25:38.437429 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Mar 2 14:25:38.437574 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Mar 2 14:25:38.234326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 14:25:38.331653 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button) Mar 2 14:25:38.331853 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 14:25:38.338907 systemd-logind[1534]: New seat seat0. Mar 2 14:25:38.385265 systemd-networkd[1452]: eth0: Gained IPv6LL Mar 2 14:25:38.436221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 14:25:38.464346 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 14:25:38.475256 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 14:25:38.551557 systemd[1]: Reached target network-online.target - Network is Online. 
Mar 2 14:25:38.563628 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 14:25:38.578207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:25:38.594185 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 14:25:38.603204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 14:25:38.657226 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 14:25:38.657597 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 14:25:38.667568 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 14:25:38.717540 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 14:25:38.834291 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 14:25:38.844468 containerd[1549]: time="2026-03-02T14:25:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 2 14:25:38.852932 containerd[1549]: time="2026-03-02T14:25:38.851423422Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 2 14:25:38.887557 containerd[1549]: time="2026-03-02T14:25:38.887504790Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.655µs" Mar 2 14:25:38.887876 containerd[1549]: time="2026-03-02T14:25:38.887845687Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 2 14:25:38.887959 containerd[1549]: time="2026-03-02T14:25:38.887941425Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 2 14:25:38.888267 containerd[1549]: time="2026-03-02T14:25:38.888244882Z" 
level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 2 14:25:38.888346 containerd[1549]: time="2026-03-02T14:25:38.888329991Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 2 14:25:38.888435 containerd[1549]: time="2026-03-02T14:25:38.888417925Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 2 14:25:38.888620 containerd[1549]: time="2026-03-02T14:25:38.888596609Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 2 14:25:38.888853 containerd[1549]: time="2026-03-02T14:25:38.888832308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 2 14:25:38.889386 containerd[1549]: time="2026-03-02T14:25:38.889360494Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 2 14:25:38.889461 containerd[1549]: time="2026-03-02T14:25:38.889443480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 2 14:25:38.889522 containerd[1549]: time="2026-03-02T14:25:38.889507109Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 2 14:25:38.889586 containerd[1549]: time="2026-03-02T14:25:38.889571418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 2 14:25:38.889935 containerd[1549]: time="2026-03-02T14:25:38.889910792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Mar 2 14:25:38.890277 containerd[1549]: time="2026-03-02T14:25:38.890256307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 2 14:25:38.890367 containerd[1549]: time="2026-03-02T14:25:38.890349341Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 2 14:25:38.890420 containerd[1549]: time="2026-03-02T14:25:38.890407449Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 2 14:25:38.890871 containerd[1549]: time="2026-03-02T14:25:38.890842442Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 2 14:25:38.892349 containerd[1549]: time="2026-03-02T14:25:38.892324048Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 2 14:25:38.892521 containerd[1549]: time="2026-03-02T14:25:38.892501238Z" level=info msg="metadata content store policy set" policy=shared Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916319325Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916413441Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916435171Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916450830Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916466330Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916479694Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916495174Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916517244Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916531401Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916549445Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916564893Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916580463Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916934894Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 2 14:25:38.917878 containerd[1549]: time="2026-03-02T14:25:38.916971182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.916989977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917004714Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917017749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917031254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917047364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917060018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917073453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917087900Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917100373Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917213304Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917233812Z" level=info msg="Start snapshots syncer" Mar 2 14:25:38.918259 containerd[1549]: time="2026-03-02T14:25:38.917265051Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 2 14:25:38.920234 containerd[1549]: time="2026-03-02T14:25:38.920186975Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 2 14:25:38.920606 containerd[1549]: time="2026-03-02T14:25:38.920583586Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 2 14:25:38.927338 containerd[1549]: time="2026-03-02T14:25:38.927303807Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 2 14:25:38.927628 containerd[1549]: time="2026-03-02T14:25:38.927602976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 2 14:25:38.928288 containerd[1549]: time="2026-03-02T14:25:38.928266534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 2 14:25:38.928361 containerd[1549]: time="2026-03-02T14:25:38.928343698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 2 14:25:38.928435 containerd[1549]: time="2026-03-02T14:25:38.928416354Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 2 14:25:38.928510 containerd[1549]: time="2026-03-02T14:25:38.928490502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 2 14:25:38.928609 containerd[1549]: time="2026-03-02T14:25:38.928587964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 2 14:25:38.929029 containerd[1549]: time="2026-03-02T14:25:38.928825828Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 2 14:25:38.929148 containerd[1549]: time="2026-03-02T14:25:38.929127632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 2 14:25:38.929411 containerd[1549]: time="2026-03-02T14:25:38.929383960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 2 14:25:38.929661 containerd[1549]: time="2026-03-02T14:25:38.929641130Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930100038Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930146855Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930164738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930180688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930196778Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930211966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930250919Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930279542Z" level=info msg="runtime interface created" Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930289461Z" level=info msg="created NRI interface" Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930304048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930329386Z" level=info msg="Connect containerd service" Mar 2 14:25:38.930422 containerd[1549]: time="2026-03-02T14:25:38.930371785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 14:25:38.932862 sshd_keygen[1568]: 
ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 14:25:38.933871 containerd[1549]: time="2026-03-02T14:25:38.933840610Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 14:25:39.018881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 14:25:39.029908 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 14:25:39.042901 tar[1538]: linux-amd64/README.md Mar 2 14:25:39.077886 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 14:25:39.078276 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 14:25:39.090647 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 14:25:39.104253 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 14:25:39.143243 containerd[1549]: time="2026-03-02T14:25:39.143144673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 14:25:39.143243 containerd[1549]: time="2026-03-02T14:25:39.143234752Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 2 14:25:39.143367 containerd[1549]: time="2026-03-02T14:25:39.143265860Z" level=info msg="Start subscribing containerd event" Mar 2 14:25:39.143367 containerd[1549]: time="2026-03-02T14:25:39.143302869Z" level=info msg="Start recovering state" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.146963093Z" level=info msg="Start event monitor" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.146992187Z" level=info msg="Start cni network conf syncer for default" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147029296Z" level=info msg="Start streaming server" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147050666Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147061567Z" level=info msg="runtime interface starting up..." Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147070523Z" level=info msg="starting plugins..." Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147094428Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 2 14:25:39.147840 containerd[1549]: time="2026-03-02T14:25:39.147335137Z" level=info msg="containerd successfully booted in 0.304564s" Mar 2 14:25:39.147887 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 14:25:39.160165 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 14:25:39.177570 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 14:25:39.188245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 14:25:39.200148 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 14:25:41.317184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:25:41.333083 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 2 14:25:41.343011 systemd[1]: Startup finished in 13.645s (kernel) + 31.887s (initrd) + 13.926s (userspace) = 59.459s. Mar 2 14:25:41.345504 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:25:43.118925 kubelet[1660]: E0302 14:25:43.118308 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:25:43.128204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:25:43.129348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 14:25:43.131531 systemd[1]: kubelet.service: Consumed 2.065s CPU time, 258.6M memory peak. Mar 2 14:25:46.769031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 14:25:46.780279 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:56190.service - OpenSSH per-connection server daemon (10.0.0.1:56190). Mar 2 14:25:47.111962 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 56190 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:47.116464 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:47.139624 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 14:25:47.144611 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 14:25:47.165517 systemd-logind[1534]: New session 1 of user core. Mar 2 14:25:47.210216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 14:25:47.216342 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 2 14:25:47.239876 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 14:25:47.248084 systemd-logind[1534]: New session c1 of user core. Mar 2 14:25:47.682112 systemd[1679]: Queued start job for default target default.target. Mar 2 14:25:47.710120 systemd[1679]: Created slice app.slice - User Application Slice. Mar 2 14:25:47.711044 systemd[1679]: Reached target paths.target - Paths. Mar 2 14:25:47.711166 systemd[1679]: Reached target timers.target - Timers. Mar 2 14:25:47.714150 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 14:25:47.807160 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 14:25:47.807362 systemd[1679]: Reached target sockets.target - Sockets. Mar 2 14:25:47.807428 systemd[1679]: Reached target basic.target - Basic System. Mar 2 14:25:47.807548 systemd[1679]: Reached target default.target - Main User Target. Mar 2 14:25:47.807610 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 14:25:47.807923 systemd[1679]: Startup finished in 542ms. Mar 2 14:25:47.822154 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 14:25:47.898966 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). Mar 2 14:25:48.044996 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:48.050541 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:48.090463 systemd-logind[1534]: New session 2 of user core. Mar 2 14:25:48.102144 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 2 14:25:48.148166 sshd[1693]: Connection closed by 10.0.0.1 port 56194 Mar 2 14:25:48.157377 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Mar 2 14:25:48.181133 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:56196.service - OpenSSH per-connection server daemon (10.0.0.1:56196). Mar 2 14:25:48.183359 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:56194.service: Deactivated successfully. Mar 2 14:25:48.209480 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 14:25:48.212605 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Mar 2 14:25:48.219421 systemd-logind[1534]: Removed session 2. Mar 2 14:25:48.287323 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 56196 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:48.292079 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:48.308344 systemd-logind[1534]: New session 3 of user core. Mar 2 14:25:48.319111 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 14:25:48.354154 sshd[1702]: Connection closed by 10.0.0.1 port 56196 Mar 2 14:25:48.354584 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Mar 2 14:25:48.376564 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:56196.service: Deactivated successfully. Mar 2 14:25:48.382637 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 14:25:48.387426 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Mar 2 14:25:48.395645 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:56202.service - OpenSSH per-connection server daemon (10.0.0.1:56202). Mar 2 14:25:48.414023 systemd-logind[1534]: Removed session 3. 
Mar 2 14:25:48.651101 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 56202 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:48.673770 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:48.707375 systemd-logind[1534]: New session 4 of user core. Mar 2 14:25:48.720183 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 14:25:48.771574 sshd[1711]: Connection closed by 10.0.0.1 port 56202 Mar 2 14:25:48.776004 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Mar 2 14:25:48.813112 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:56202.service: Deactivated successfully. Mar 2 14:25:48.816339 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 14:25:48.821062 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. Mar 2 14:25:48.826401 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:56216.service - OpenSSH per-connection server daemon (10.0.0.1:56216). Mar 2 14:25:48.829348 systemd-logind[1534]: Removed session 4. Mar 2 14:25:48.955188 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 56216 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:48.958438 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:48.974496 systemd-logind[1534]: New session 5 of user core. Mar 2 14:25:49.001555 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 14:25:49.058249 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 14:25:49.059018 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:25:49.114356 sudo[1721]: pam_unix(sudo:session): session closed for user root Mar 2 14:25:49.150119 sshd[1720]: Connection closed by 10.0.0.1 port 56216 Mar 2 14:25:49.152137 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 2 14:25:49.205558 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:56216.service: Deactivated successfully. Mar 2 14:25:49.209634 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 14:25:49.218642 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. Mar 2 14:25:49.225898 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:56222.service - OpenSSH per-connection server daemon (10.0.0.1:56222). Mar 2 14:25:49.242490 systemd-logind[1534]: Removed session 5. Mar 2 14:25:49.478593 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56222 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:49.491442 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:49.515009 systemd-logind[1534]: New session 6 of user core. Mar 2 14:25:49.542287 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 2 14:25:49.618173 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 14:25:49.621272 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:25:49.645049 sudo[1732]: pam_unix(sudo:session): session closed for user root Mar 2 14:25:49.658575 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 2 14:25:49.659221 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:25:49.706375 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 14:25:49.898006 augenrules[1754]: No rules Mar 2 14:25:49.903449 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 14:25:49.904492 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 14:25:49.913476 sudo[1731]: pam_unix(sudo:session): session closed for user root Mar 2 14:25:49.921385 sshd[1730]: Connection closed by 10.0.0.1 port 56222 Mar 2 14:25:49.922645 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 2 14:25:49.948612 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:56222.service: Deactivated successfully. Mar 2 14:25:49.957011 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 14:25:49.964981 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Mar 2 14:25:49.969354 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:56232.service - OpenSSH per-connection server daemon (10.0.0.1:56232). Mar 2 14:25:49.973300 systemd-logind[1534]: Removed session 6. Mar 2 14:25:50.129741 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 56232 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:25:50.133559 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:25:50.159209 systemd-logind[1534]: New session 7 of user core. 
Mar 2 14:25:50.170262 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 14:25:50.227416 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 14:25:50.228437 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:25:53.230483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 14:25:53.239964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:25:53.306455 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 14:25:53.313528 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 14:25:53.722151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:25:53.747262 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:25:53.967197 kubelet[1802]: E0302 14:25:53.966504 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:25:53.985198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:25:53.985588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 14:25:53.986349 systemd[1]: kubelet.service: Consumed 446ms CPU time, 110.7M memory peak. 
Mar 2 14:25:54.076804 dockerd[1791]: time="2026-03-02T14:25:54.076435510Z" level=info msg="Starting up" Mar 2 14:25:54.082106 dockerd[1791]: time="2026-03-02T14:25:54.080660258Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 2 14:25:54.142183 dockerd[1791]: time="2026-03-02T14:25:54.142056473Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 2 14:25:54.473739 dockerd[1791]: time="2026-03-02T14:25:54.472154719Z" level=info msg="Loading containers: start." Mar 2 14:25:54.531630 kernel: Initializing XFRM netlink socket Mar 2 14:25:56.070325 systemd-networkd[1452]: docker0: Link UP Mar 2 14:25:56.100789 dockerd[1791]: time="2026-03-02T14:25:56.097559215Z" level=info msg="Loading containers: done." Mar 2 14:25:56.183877 dockerd[1791]: time="2026-03-02T14:25:56.183440663Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 14:25:56.183877 dockerd[1791]: time="2026-03-02T14:25:56.183599600Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 2 14:25:56.191136 dockerd[1791]: time="2026-03-02T14:25:56.185080063Z" level=info msg="Initializing buildkit" Mar 2 14:25:56.348155 dockerd[1791]: time="2026-03-02T14:25:56.347968666Z" level=info msg="Completed buildkit initialization" Mar 2 14:25:56.368897 dockerd[1791]: time="2026-03-02T14:25:56.368574270Z" level=info msg="Daemon has completed initialization" Mar 2 14:25:56.369162 dockerd[1791]: time="2026-03-02T14:25:56.369066133Z" level=info msg="API listen on /run/docker.sock" Mar 2 14:25:56.369979 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 2 14:26:00.456969 containerd[1549]: time="2026-03-02T14:26:00.456638519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 2 14:26:01.840508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427375877.mount: Deactivated successfully. Mar 2 14:26:04.225364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 14:26:04.238007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:26:04.811288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:26:04.822369 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:26:04.958082 kubelet[2088]: E0302 14:26:04.957830 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:26:04.971259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:26:04.971586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 14:26:04.972491 systemd[1]: kubelet.service: Consumed 353ms CPU time, 110.7M memory peak. 
Mar 2 14:26:07.516808 containerd[1549]: time="2026-03-02T14:26:07.516524356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:07.519013 containerd[1549]: time="2026-03-02T14:26:07.518638100Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 2 14:26:07.527304 containerd[1549]: time="2026-03-02T14:26:07.526430252Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:07.533869 containerd[1549]: time="2026-03-02T14:26:07.533183377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:07.535082 containerd[1549]: time="2026-03-02T14:26:07.534488378Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 7.075348719s"
Mar 2 14:26:07.535082 containerd[1549]: time="2026-03-02T14:26:07.534529374Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 2 14:26:07.539100 containerd[1549]: time="2026-03-02T14:26:07.539021647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 2 14:26:11.316333 containerd[1549]: time="2026-03-02T14:26:11.316090831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:11.318840 containerd[1549]: time="2026-03-02T14:26:11.318803644Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 2 14:26:11.321890 containerd[1549]: time="2026-03-02T14:26:11.321404442Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:11.328008 containerd[1549]: time="2026-03-02T14:26:11.327815922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:11.330066 containerd[1549]: time="2026-03-02T14:26:11.329492911Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.790430952s"
Mar 2 14:26:11.330868 containerd[1549]: time="2026-03-02T14:26:11.330840043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 2 14:26:11.336494 containerd[1549]: time="2026-03-02T14:26:11.336150162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 2 14:26:14.995140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 14:26:15.001011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:15.693974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:15.740600 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:26:16.859216 kubelet[2113]: E0302 14:26:16.858659 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:26:16.868481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:26:16.870531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:26:16.871374 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 110.7M memory peak.
Mar 2 14:26:17.873144 containerd[1549]: time="2026-03-02T14:26:17.869080860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:17.892791 containerd[1549]: time="2026-03-02T14:26:17.890391056Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 2 14:26:17.930605 containerd[1549]: time="2026-03-02T14:26:17.928818280Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:18.050247 containerd[1549]: time="2026-03-02T14:26:18.047652653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:18.062117 containerd[1549]: time="2026-03-02T14:26:18.061538634Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 6.725298533s"
Mar 2 14:26:18.062117 containerd[1549]: time="2026-03-02T14:26:18.061661075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 2 14:26:18.100913 containerd[1549]: time="2026-03-02T14:26:18.099429201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 2 14:26:21.704810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084525857.mount: Deactivated successfully.
Mar 2 14:26:22.887182 update_engine[1535]: I20260302 14:26:22.886173 1535 update_attempter.cc:509] Updating boot flags...
Mar 2 14:26:25.288820 containerd[1549]: time="2026-03-02T14:26:25.288452071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:25.302800 containerd[1549]: time="2026-03-02T14:26:25.299223485Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 2 14:26:25.323473 containerd[1549]: time="2026-03-02T14:26:25.323199257Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:25.325808 containerd[1549]: time="2026-03-02T14:26:25.325569106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:25.329506 containerd[1549]: time="2026-03-02T14:26:25.329273401Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 7.22974412s"
Mar 2 14:26:25.329506 containerd[1549]: time="2026-03-02T14:26:25.329383924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 2 14:26:25.348180 containerd[1549]: time="2026-03-02T14:26:25.346196813Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 2 14:26:26.476575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068967469.mount: Deactivated successfully.
Mar 2 14:26:26.984407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 14:26:27.007361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:27.997566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:28.133404 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:26:28.793924 kubelet[2164]: E0302 14:26:28.788660 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:26:28.803043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:26:28.803422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:26:28.804214 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 108.7M memory peak.
Mar 2 14:26:34.636769 containerd[1549]: time="2026-03-02T14:26:34.635448215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:34.644233 containerd[1549]: time="2026-03-02T14:26:34.643495713Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 2 14:26:34.648505 containerd[1549]: time="2026-03-02T14:26:34.648213645Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:34.661388 containerd[1549]: time="2026-03-02T14:26:34.661308709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:34.666998 containerd[1549]: time="2026-03-02T14:26:34.666430346Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 9.320050357s"
Mar 2 14:26:34.667573 containerd[1549]: time="2026-03-02T14:26:34.667176091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 2 14:26:34.678130 containerd[1549]: time="2026-03-02T14:26:34.676454386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 2 14:26:35.481045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657756050.mount: Deactivated successfully.
Mar 2 14:26:35.518443 containerd[1549]: time="2026-03-02T14:26:35.517459971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:35.522759 containerd[1549]: time="2026-03-02T14:26:35.522317898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 2 14:26:35.526564 containerd[1549]: time="2026-03-02T14:26:35.526498165Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:35.539953 containerd[1549]: time="2026-03-02T14:26:35.539854248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:35.541472 containerd[1549]: time="2026-03-02T14:26:35.541317910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 864.767308ms"
Mar 2 14:26:35.541472 containerd[1549]: time="2026-03-02T14:26:35.541364116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 2 14:26:35.544800 containerd[1549]: time="2026-03-02T14:26:35.544442919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 2 14:26:36.485241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784641797.mount: Deactivated successfully.
Mar 2 14:26:38.976098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 2 14:26:38.984244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:39.486851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:39.501396 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:26:39.742136 kubelet[2276]: E0302 14:26:39.741856 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:26:39.750873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:26:39.751175 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:26:39.751947 systemd[1]: kubelet.service: Consumed 462ms CPU time, 110.3M memory peak.
Mar 2 14:26:42.361796 containerd[1549]: time="2026-03-02T14:26:42.359854161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:42.377224 containerd[1549]: time="2026-03-02T14:26:42.375308011Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 2 14:26:42.389222 containerd[1549]: time="2026-03-02T14:26:42.389125764Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:42.409644 containerd[1549]: time="2026-03-02T14:26:42.409597915Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 6.86495737s"
Mar 2 14:26:42.415063 containerd[1549]: time="2026-03-02T14:26:42.409934829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 2 14:26:42.415063 containerd[1549]: time="2026-03-02T14:26:42.408511106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:48.130570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:48.130993 systemd[1]: kubelet.service: Consumed 462ms CPU time, 110.3M memory peak.
Mar 2 14:26:48.135565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:48.212875 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-7.scope)...
Mar 2 14:26:48.213158 systemd[1]: Reloading...
Mar 2 14:26:48.398817 zram_generator::config[2369]: No configuration found.
Mar 2 14:26:49.096224 systemd[1]: Reloading finished in 881 ms.
Mar 2 14:26:49.235354 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 14:26:49.235665 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 14:26:49.238132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:49.238195 systemd[1]: kubelet.service: Consumed 239ms CPU time, 98.3M memory peak.
Mar 2 14:26:49.248976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:49.770809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:49.792882 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 14:26:50.021578 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 14:26:50.021578 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 14:26:50.021578 kubelet[2420]: I0302 14:26:50.021378 2420 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 14:26:51.158011 kubelet[2420]: I0302 14:26:51.157143 2420 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 2 14:26:51.158011 kubelet[2420]: I0302 14:26:51.157241 2420 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 14:26:51.160869 kubelet[2420]: I0302 14:26:51.159900 2420 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 14:26:51.160869 kubelet[2420]: I0302 14:26:51.159990 2420 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 14:26:51.160869 kubelet[2420]: I0302 14:26:51.160275 2420 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 14:26:51.330915 kubelet[2420]: I0302 14:26:51.330147 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 14:26:51.330915 kubelet[2420]: E0302 14:26:51.330194 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 14:26:51.363846 kubelet[2420]: I0302 14:26:51.362100 2420 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 2 14:26:51.381170 kubelet[2420]: I0302 14:26:51.381146 2420 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 14:26:51.383857 kubelet[2420]: I0302 14:26:51.383473 2420 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 14:26:51.384900 kubelet[2420]: I0302 14:26:51.383654 2420 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 14:26:51.384900 kubelet[2420]: I0302 14:26:51.384145 2420 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 14:26:51.384900 kubelet[2420]: I0302 14:26:51.384159 2420 container_manager_linux.go:306] "Creating device plugin manager"
Mar 2 14:26:51.384900 kubelet[2420]: I0302 14:26:51.384281 2420 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 14:26:51.389834 kubelet[2420]: I0302 14:26:51.388989 2420 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 14:26:51.391092 kubelet[2420]: I0302 14:26:51.390360 2420 kubelet.go:475] "Attempting to sync node with API server"
Mar 2 14:26:51.391092 kubelet[2420]: I0302 14:26:51.390445 2420 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 14:26:51.391847 kubelet[2420]: I0302 14:26:51.391276 2420 kubelet.go:387] "Adding apiserver pod source"
Mar 2 14:26:51.391847 kubelet[2420]: I0302 14:26:51.391305 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 14:26:51.395351 kubelet[2420]: E0302 14:26:51.395319 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 14:26:51.396397 kubelet[2420]: E0302 14:26:51.395920 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 14:26:51.399224 kubelet[2420]: I0302 14:26:51.399201 2420 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 2 14:26:51.400383 kubelet[2420]: I0302 14:26:51.400233 2420 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 14:26:51.400441 kubelet[2420]: I0302 14:26:51.400402 2420 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 14:26:51.402791 kubelet[2420]: W0302 14:26:51.401657 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 14:26:51.419602 kubelet[2420]: I0302 14:26:51.419374 2420 server.go:1262] "Started kubelet"
Mar 2 14:26:51.421206 kubelet[2420]: I0302 14:26:51.421042 2420 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 14:26:51.422833 kubelet[2420]: I0302 14:26:51.422661 2420 server.go:310] "Adding debug handlers to kubelet server"
Mar 2 14:26:51.425617 kubelet[2420]: I0302 14:26:51.425309 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 14:26:51.438108 kubelet[2420]: I0302 14:26:51.438039 2420 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 14:26:51.438272 kubelet[2420]: I0302 14:26:51.438250 2420 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 14:26:51.438848 kubelet[2420]: I0302 14:26:51.438831 2420 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 14:26:51.440451 kubelet[2420]: E0302 14:26:51.439134 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18990c71d6a85c75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 14:26:51.419196533 +0000 UTC m=+1.609917306,LastTimestamp:2026-03-02 14:26:51.419196533 +0000 UTC m=+1.609917306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 14:26:51.461863 kubelet[2420]: I0302 14:26:51.442101 2420 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 2 14:26:51.464179 kubelet[2420]: I0302 14:26:51.442248 2420 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 14:26:51.464262 kubelet[2420]: E0302 14:26:51.442586 2420 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 14:26:51.464354 kubelet[2420]: E0302 14:26:51.443268 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 14:26:51.464427 kubelet[2420]: E0302 14:26:51.443413 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms"
Mar 2 14:26:51.469996 kubelet[2420]: I0302 14:26:51.444017 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 14:26:51.469996 kubelet[2420]: I0302 14:26:51.466330 2420 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 14:26:51.470576 kubelet[2420]: I0302 14:26:51.470474 2420 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 14:26:51.471265 kubelet[2420]: E0302 14:26:51.471242 2420 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 14:26:51.479409 kubelet[2420]: I0302 14:26:51.479389 2420 factory.go:223] Registration of the containerd container factory successfully
Mar 2 14:26:51.479602 kubelet[2420]: I0302 14:26:51.479585 2420 factory.go:223] Registration of the systemd container factory successfully
Mar 2 14:26:51.545317 kubelet[2420]: I0302 14:26:51.543420 2420 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 14:26:51.545317 kubelet[2420]: I0302 14:26:51.543439 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 14:26:51.545317 kubelet[2420]: I0302 14:26:51.543457 2420 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 14:26:51.553341 kubelet[2420]: I0302 14:26:51.552283 2420 policy_none.go:49] "None policy: Start"
Mar 2 14:26:51.553341 kubelet[2420]: I0302 14:26:51.552466 2420 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 14:26:51.553341 kubelet[2420]: I0302 14:26:51.552568 2420 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 14:26:51.562420 kubelet[2420]: I0302 14:26:51.562125 2420 policy_none.go:47] "Start"
Mar 2 14:26:51.568174 kubelet[2420]: E0302 14:26:51.566917 2420 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 14:26:51.577234 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 14:26:51.586147 kubelet[2420]: I0302 14:26:51.586024 2420 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 14:26:51.599605 kubelet[2420]: I0302 14:26:51.598866 2420 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 14:26:51.599605 kubelet[2420]: I0302 14:26:51.598888 2420 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 2 14:26:51.599605 kubelet[2420]: I0302 14:26:51.598989 2420 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 2 14:26:51.599605 kubelet[2420]: E0302 14:26:51.599038 2420 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 14:26:51.599282 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 14:26:51.602064 kubelet[2420]: E0302 14:26:51.601573 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 14:26:51.610378 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 14:26:51.623105 kubelet[2420]: E0302 14:26:51.622987 2420 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 14:26:51.624287 kubelet[2420]: I0302 14:26:51.624262 2420 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 14:26:51.624557 kubelet[2420]: I0302 14:26:51.624441 2420 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 14:26:51.626002 kubelet[2420]: I0302 14:26:51.625985 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 14:26:51.629990 kubelet[2420]: E0302 14:26:51.629971 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 14:26:51.630102 kubelet[2420]: E0302 14:26:51.630086 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 14:26:51.673805 kubelet[2420]: E0302 14:26:51.672326 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms"
Mar 2 14:26:51.735155 kubelet[2420]: I0302 14:26:51.735020 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 14:26:51.741979 kubelet[2420]: E0302 14:26:51.741937 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost"
Mar 2 14:26:51.770958 systemd[1]: Created slice kubepods-burstable-pod4c519ff95342b92df442df672de5fafe.slice - libcontainer container kubepods-burstable-pod4c519ff95342b92df442df672de5fafe.slice.
Mar 2 14:26:51.782899 kubelet[2420]: I0302 14:26:51.781377 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:26:51.782899 kubelet[2420]: I0302 14:26:51.782415 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:26:51.783448 kubelet[2420]: I0302 14:26:51.783417 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:26:51.783581 kubelet[2420]: I0302 14:26:51.783453 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:26:51.783581 kubelet[2420]: I0302 14:26:51.783568 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:26:51.783662 kubelet[2420]: I0302 14:26:51.783604 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:26:51.783662 kubelet[2420]: I0302 14:26:51.783627 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 14:26:51.783662 kubelet[2420]: I0302 14:26:51.783648 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:26:51.783662 kubelet[2420]: I0302 14:26:51.783842 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:26:51.803976 kubelet[2420]: E0302 14:26:51.800289 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 14:26:51.825461 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 2 14:26:51.858927 kubelet[2420]: E0302 14:26:51.855085 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:51.876367 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 2 14:26:51.892346 kubelet[2420]: E0302 14:26:51.890277 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:51.909808 kubelet[2420]: E0302 14:26:51.909388 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:51.918133 containerd[1549]: time="2026-03-02T14:26:51.915630347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 2 14:26:51.953228 kubelet[2420]: I0302 14:26:51.949976 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 14:26:51.953228 kubelet[2420]: E0302 14:26:51.952169 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Mar 2 14:26:52.076995 kubelet[2420]: E0302 14:26:52.075246 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Mar 2 14:26:52.118821 kubelet[2420]: E0302 14:26:52.118141 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:52.126825 containerd[1549]: time="2026-03-02T14:26:52.126044594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c519ff95342b92df442df672de5fafe,Namespace:kube-system,Attempt:0,}" Mar 2 14:26:52.169283 kubelet[2420]: E0302 14:26:52.169024 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:52.172192 containerd[1549]: time="2026-03-02T14:26:52.170203214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 2 14:26:52.307838 kubelet[2420]: E0302 14:26:52.307416 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 14:26:52.362121 kubelet[2420]: I0302 14:26:52.361866 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 14:26:52.362834 kubelet[2420]: E0302 14:26:52.362322 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Mar 2 14:26:52.648858 kubelet[2420]: E0302 14:26:52.648288 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 14:26:52.753933 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1615079380.mount: Deactivated successfully. Mar 2 14:26:52.782569 containerd[1549]: time="2026-03-02T14:26:52.780902455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:26:52.794926 containerd[1549]: time="2026-03-02T14:26:52.793583496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 14:26:52.808757 containerd[1549]: time="2026-03-02T14:26:52.804629895Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:26:52.809362 containerd[1549]: time="2026-03-02T14:26:52.809322819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:26:52.811311 containerd[1549]: time="2026-03-02T14:26:52.810952090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 2 14:26:52.833440 kubelet[2420]: E0302 14:26:52.831885 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 14:26:52.838200 containerd[1549]: time="2026-03-02T14:26:52.838152959Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:26:52.849353 containerd[1549]: 
time="2026-03-02T14:26:52.849203347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:26:52.850911 containerd[1549]: time="2026-03-02T14:26:52.850231855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 921.575623ms" Mar 2 14:26:52.853177 containerd[1549]: time="2026-03-02T14:26:52.853017286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 2 14:26:52.856163 containerd[1549]: time="2026-03-02T14:26:52.855289920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 720.100148ms" Mar 2 14:26:52.861015 containerd[1549]: time="2026-03-02T14:26:52.860191657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 679.195329ms" Mar 2 14:26:52.879931 kubelet[2420]: E0302 14:26:52.879312 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: 
connection refused" interval="1.6s" Mar 2 14:26:53.009324 containerd[1549]: time="2026-03-02T14:26:53.008950698Z" level=info msg="connecting to shim 875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e" address="unix:///run/containerd/s/b6a97a71250d035a5824a6cdad9a1cf7632af634bfe59646c2e8f5fbf6cbad4b" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:26:53.039588 containerd[1549]: time="2026-03-02T14:26:53.037448907Z" level=info msg="connecting to shim f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186" address="unix:///run/containerd/s/c6c49a0c5210649399869e5089bb124c8069700950074cbe1f935de77e21b30e" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:26:53.040250 containerd[1549]: time="2026-03-02T14:26:53.038936184Z" level=info msg="connecting to shim 3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11" address="unix:///run/containerd/s/6bd0c82e16e8edaaaf2fd3c8f9cfab4ff7ba40b29d89d2ae4ada4c0f6d663270" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:26:53.125375 systemd[1]: Started cri-containerd-875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e.scope - libcontainer container 875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e. Mar 2 14:26:53.142652 kubelet[2420]: E0302 14:26:53.142112 2420 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 14:26:53.143098 systemd[1]: Started cri-containerd-3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11.scope - libcontainer container 3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11. 
Mar 2 14:26:53.161162 systemd[1]: Started cri-containerd-f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186.scope - libcontainer container f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186. Mar 2 14:26:53.176854 kubelet[2420]: I0302 14:26:53.176287 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 14:26:53.182896 kubelet[2420]: E0302 14:26:53.179218 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Mar 2 14:26:53.360976 containerd[1549]: time="2026-03-02T14:26:53.360028772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e\"" Mar 2 14:26:53.361984 kubelet[2420]: E0302 14:26:53.361469 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:53.380210 containerd[1549]: time="2026-03-02T14:26:53.379376126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11\"" Mar 2 14:26:53.387103 kubelet[2420]: E0302 14:26:53.387071 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:53.391619 containerd[1549]: time="2026-03-02T14:26:53.391582888Z" level=info msg="CreateContainer within sandbox \"875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 
2 14:26:53.425118 containerd[1549]: time="2026-03-02T14:26:53.424033089Z" level=info msg="CreateContainer within sandbox \"3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 14:26:53.440930 containerd[1549]: time="2026-03-02T14:26:53.440826854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c519ff95342b92df442df672de5fafe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186\"" Mar 2 14:26:53.441635 kubelet[2420]: E0302 14:26:53.441382 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:53.455168 kubelet[2420]: E0302 14:26:53.454972 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 14:26:53.464209 containerd[1549]: time="2026-03-02T14:26:53.462083928Z" level=info msg="CreateContainer within sandbox \"f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 14:26:53.466464 containerd[1549]: time="2026-03-02T14:26:53.466346029Z" level=info msg="Container 63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:26:53.515309 containerd[1549]: time="2026-03-02T14:26:53.515169596Z" level=info msg="Container ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:26:53.528636 containerd[1549]: 
time="2026-03-02T14:26:53.527231080Z" level=info msg="CreateContainer within sandbox \"875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b\"" Mar 2 14:26:53.530882 containerd[1549]: time="2026-03-02T14:26:53.530157014Z" level=info msg="StartContainer for \"63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b\"" Mar 2 14:26:53.532991 containerd[1549]: time="2026-03-02T14:26:53.532863538Z" level=info msg="connecting to shim 63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b" address="unix:///run/containerd/s/b6a97a71250d035a5824a6cdad9a1cf7632af634bfe59646c2e8f5fbf6cbad4b" protocol=ttrpc version=3 Mar 2 14:26:53.533433 containerd[1549]: time="2026-03-02T14:26:53.533180057Z" level=info msg="Container fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:26:53.554255 containerd[1549]: time="2026-03-02T14:26:53.554194716Z" level=info msg="CreateContainer within sandbox \"3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e\"" Mar 2 14:26:53.562133 containerd[1549]: time="2026-03-02T14:26:53.562101933Z" level=info msg="StartContainer for \"ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e\"" Mar 2 14:26:53.565171 containerd[1549]: time="2026-03-02T14:26:53.565096143Z" level=info msg="connecting to shim ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e" address="unix:///run/containerd/s/6bd0c82e16e8edaaaf2fd3c8f9cfab4ff7ba40b29d89d2ae4ada4c0f6d663270" protocol=ttrpc version=3 Mar 2 14:26:53.577003 containerd[1549]: time="2026-03-02T14:26:53.576958787Z" level=info msg="CreateContainer within sandbox 
\"f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84\"" Mar 2 14:26:53.585218 containerd[1549]: time="2026-03-02T14:26:53.585160886Z" level=info msg="StartContainer for \"fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84\"" Mar 2 14:26:53.629297 containerd[1549]: time="2026-03-02T14:26:53.627626087Z" level=info msg="connecting to shim fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84" address="unix:///run/containerd/s/c6c49a0c5210649399869e5089bb124c8069700950074cbe1f935de77e21b30e" protocol=ttrpc version=3 Mar 2 14:26:53.664047 systemd[1]: Started cri-containerd-63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b.scope - libcontainer container 63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b. Mar 2 14:26:53.705957 systemd[1]: Started cri-containerd-ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e.scope - libcontainer container ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e. Mar 2 14:26:53.813177 systemd[1]: Started cri-containerd-fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84.scope - libcontainer container fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84. 
Mar 2 14:26:54.084845 containerd[1549]: time="2026-03-02T14:26:54.083002530Z" level=info msg="StartContainer for \"63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b\" returns successfully" Mar 2 14:26:54.143840 containerd[1549]: time="2026-03-02T14:26:54.143061395Z" level=info msg="StartContainer for \"ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e\" returns successfully" Mar 2 14:26:54.188125 containerd[1549]: time="2026-03-02T14:26:54.187991494Z" level=info msg="StartContainer for \"fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84\" returns successfully" Mar 2 14:26:54.748566 kubelet[2420]: E0302 14:26:54.748216 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:54.748566 kubelet[2420]: E0302 14:26:54.748358 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:54.775630 kubelet[2420]: E0302 14:26:54.774091 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:54.775630 kubelet[2420]: E0302 14:26:54.774231 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:54.793206 kubelet[2420]: I0302 14:26:54.793088 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 14:26:54.794776 kubelet[2420]: E0302 14:26:54.794462 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:54.795017 kubelet[2420]: E0302 14:26:54.794913 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:55.789167 kubelet[2420]: E0302 14:26:55.787936 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:55.789167 kubelet[2420]: E0302 14:26:55.788376 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:55.789167 kubelet[2420]: E0302 14:26:55.788610 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:55.789167 kubelet[2420]: E0302 14:26:55.788845 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:55.795577 kubelet[2420]: E0302 14:26:55.794373 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:55.795577 kubelet[2420]: E0302 14:26:55.794644 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:56.793809 kubelet[2420]: E0302 14:26:56.793144 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:56.793809 kubelet[2420]: E0302 14:26:56.793289 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:56.796914 kubelet[2420]: E0302 14:26:56.796239 2420 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:56.796914 kubelet[2420]: E0302 14:26:56.796630 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:57.816465 kubelet[2420]: E0302 14:26:57.816219 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:57.816465 kubelet[2420]: E0302 14:26:57.816378 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:58.940093 kubelet[2420]: E0302 14:26:58.940037 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 14:26:59.052443 kubelet[2420]: E0302 14:26:59.052412 2420 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:26:59.053398 kubelet[2420]: E0302 14:26:59.053324 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:26:59.091810 kubelet[2420]: I0302 14:26:59.091205 2420 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 14:26:59.143445 kubelet[2420]: I0302 14:26:59.143414 2420 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:26:59.198106 kubelet[2420]: E0302 14:26:59.197018 2420 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:26:59.205033 kubelet[2420]: I0302 14:26:59.198561 2420 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 14:26:59.205033 kubelet[2420]: E0302 14:26:59.204934 2420 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 14:26:59.205033 kubelet[2420]: I0302 14:26:59.204957 2420 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:26:59.214955 kubelet[2420]: E0302 14:26:59.212061 2420 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 14:26:59.402254 kubelet[2420]: I0302 14:26:59.399437 2420 apiserver.go:52] "Watching apiserver" Mar 2 14:26:59.465454 kubelet[2420]: I0302 14:26:59.464908 2420 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 14:27:04.822922 kubelet[2420]: I0302 14:27:04.820346 2420 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:04.896109 kubelet[2420]: E0302 14:27:04.894384 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:05.028993 kubelet[2420]: E0302 14:27:05.026361 2420 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:05.908309 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-7.scope)... Mar 2 14:27:05.908324 systemd[1]: Reloading... 
Mar 2 14:27:06.185237 zram_generator::config[2764]: No configuration found. Mar 2 14:27:06.644918 systemd[1]: Reloading finished in 735 ms. Mar 2 14:27:06.761907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:27:06.793420 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 14:27:06.793981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:27:06.794041 systemd[1]: kubelet.service: Consumed 3.032s CPU time, 130.4M memory peak. Mar 2 14:27:06.804131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:27:07.321133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:27:07.356356 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 14:27:07.742908 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 14:27:07.742908 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 14:27:07.742908 kubelet[2802]: I0302 14:27:07.740290 2802 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 14:27:07.791053 kubelet[2802]: I0302 14:27:07.789130 2802 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 14:27:07.791053 kubelet[2802]: I0302 14:27:07.789227 2802 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 14:27:07.791053 kubelet[2802]: I0302 14:27:07.789266 2802 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 14:27:07.791053 kubelet[2802]: I0302 14:27:07.789275 2802 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 14:27:07.795324 kubelet[2802]: I0302 14:27:07.794155 2802 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 14:27:07.804939 kubelet[2802]: I0302 14:27:07.803957 2802 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 14:27:07.819910 kubelet[2802]: I0302 14:27:07.819554 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 14:27:07.861245 kubelet[2802]: I0302 14:27:07.859640 2802 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 14:27:07.882105 kubelet[2802]: I0302 14:27:07.882073 2802 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 14:27:07.883805 kubelet[2802]: I0302 14:27:07.882529 2802 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 14:27:07.883805 kubelet[2802]: I0302 14:27:07.882558 2802 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 14:27:07.883805 kubelet[2802]: I0302 14:27:07.882905 2802 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 14:27:07.883805 
kubelet[2802]: I0302 14:27:07.882917 2802 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.882944 2802 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.883158 2802 state_mem.go:36] "Initialized new in-memory state store" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.883323 2802 kubelet.go:475] "Attempting to sync node with API server" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.883337 2802 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.883370 2802 kubelet.go:387] "Adding apiserver pod source" Mar 2 14:27:07.884243 kubelet[2802]: I0302 14:27:07.883387 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 14:27:07.893967 kubelet[2802]: I0302 14:27:07.890655 2802 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 14:27:07.913338 kubelet[2802]: I0302 14:27:07.912986 2802 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 14:27:07.913338 kubelet[2802]: I0302 14:27:07.913069 2802 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 14:27:07.976941 kubelet[2802]: I0302 14:27:07.976628 2802 server.go:1262] "Started kubelet" Mar 2 14:27:07.984938 kubelet[2802]: I0302 14:27:07.981947 2802 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 14:27:07.984938 kubelet[2802]: I0302 14:27:07.982081 2802 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 14:27:07.984938 kubelet[2802]: I0302 14:27:07.982378 2802 server.go:249] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 14:27:07.992989 kubelet[2802]: I0302 14:27:07.992906 2802 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 14:27:08.045255 kubelet[2802]: I0302 14:27:08.044401 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 14:27:08.047903 kubelet[2802]: I0302 14:27:08.046936 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 14:27:08.058515 kubelet[2802]: I0302 14:27:08.057198 2802 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 14:27:08.067884 kubelet[2802]: I0302 14:27:08.059312 2802 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 14:27:08.067884 kubelet[2802]: I0302 14:27:08.059652 2802 reconciler.go:29] "Reconciler: start to sync state" Mar 2 14:27:08.069501 kubelet[2802]: I0302 14:27:08.068305 2802 factory.go:223] Registration of the systemd container factory successfully Mar 2 14:27:08.069501 kubelet[2802]: I0302 14:27:08.068535 2802 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 14:27:08.078247 kubelet[2802]: I0302 14:27:08.077931 2802 server.go:310] "Adding debug handlers to kubelet server" Mar 2 14:27:08.111087 kubelet[2802]: E0302 14:27:08.110913 2802 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 14:27:08.117165 kubelet[2802]: I0302 14:27:08.114284 2802 factory.go:223] Registration of the containerd container factory successfully Mar 2 14:27:08.224636 sudo[2827]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 2 14:27:08.230312 sudo[2827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 2 14:27:08.471563 kubelet[2802]: I0302 14:27:08.471217 2802 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 14:27:08.489291 kubelet[2802]: I0302 14:27:08.489255 2802 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 14:27:08.489554 kubelet[2802]: I0302 14:27:08.489540 2802 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 14:27:08.489644 kubelet[2802]: I0302 14:27:08.489632 2802 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 14:27:08.489926 kubelet[2802]: E0302 14:27:08.489903 2802 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 14:27:08.591238 kubelet[2802]: E0302 14:27:08.590561 2802 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.598900 2802 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.598914 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.598932 2802 state_mem.go:36] "Initialized new in-memory state store" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.599077 2802 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.599091 2802 state_mem.go:96] "Updated CPUSet assignments" 
assignments={} Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.599110 2802 policy_none.go:49] "None policy: Start" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.599123 2802 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 14:27:08.599206 kubelet[2802]: I0302 14:27:08.599139 2802 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 14:27:08.600990 kubelet[2802]: I0302 14:27:08.599242 2802 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 14:27:08.600990 kubelet[2802]: I0302 14:27:08.599254 2802 policy_none.go:47] "Start" Mar 2 14:27:08.647988 kubelet[2802]: E0302 14:27:08.647342 2802 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 14:27:08.647988 kubelet[2802]: I0302 14:27:08.647966 2802 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 14:27:08.648100 kubelet[2802]: I0302 14:27:08.647983 2802 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 14:27:08.650216 kubelet[2802]: I0302 14:27:08.649072 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 14:27:08.660938 kubelet[2802]: E0302 14:27:08.657881 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 14:27:08.800967 kubelet[2802]: I0302 14:27:08.797144 2802 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:08.800967 kubelet[2802]: I0302 14:27:08.798134 2802 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 14:27:08.800967 kubelet[2802]: I0302 14:27:08.799010 2802 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.828818 kubelet[2802]: I0302 14:27:08.827606 2802 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 14:27:08.888416 kubelet[2802]: E0302 14:27:08.887351 2802 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:08.888416 kubelet[2802]: I0302 14:27:08.887559 2802 apiserver.go:52] "Watching apiserver" Mar 2 14:27:08.890001 kubelet[2802]: I0302 14:27:08.889269 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:08.890001 kubelet[2802]: I0302 14:27:08.889373 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:08.890001 kubelet[2802]: I0302 14:27:08.889402 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.893189 kubelet[2802]: I0302 14:27:08.892837 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.893189 kubelet[2802]: I0302 14:27:08.892945 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.893189 kubelet[2802]: I0302 14:27:08.892972 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c519ff95342b92df442df672de5fafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c519ff95342b92df442df672de5fafe\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:08.893189 kubelet[2802]: I0302 14:27:08.892994 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.893189 kubelet[2802]: I0302 14:27:08.893015 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:08.893393 kubelet[2802]: I0302 14:27:08.893037 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 14:27:08.926137 kubelet[2802]: I0302 14:27:08.924262 2802 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 14:27:08.926137 kubelet[2802]: I0302 14:27:08.924357 2802 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 14:27:08.961880 kubelet[2802]: I0302 14:27:08.961412 2802 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 14:27:09.160348 kubelet[2802]: E0302 14:27:09.154887 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:09.163198 kubelet[2802]: E0302 14:27:09.160566 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:09.188162 kubelet[2802]: E0302 14:27:09.187998 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:09.230187 sudo[2827]: pam_unix(sudo:session): session closed for user root Mar 2 14:27:09.658407 kubelet[2802]: E0302 14:27:09.658098 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:09.659257 kubelet[2802]: I0302 14:27:09.659201 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.6591806810000005 podStartE2EDuration="5.659180681s" podCreationTimestamp="2026-03-02 14:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:27:09.537586905 +0000 UTC m=+2.140979891" watchObservedRunningTime="2026-03-02 14:27:09.659180681 +0000 UTC m=+2.262573647" Mar 2 14:27:09.668402 kubelet[2802]: E0302 14:27:09.660345 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:09.668402 kubelet[2802]: I0302 14:27:09.667862 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.667847161 podStartE2EDuration="1.667847161s" podCreationTimestamp="2026-03-02 14:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:27:09.666288023 +0000 UTC m=+2.269680989" watchObservedRunningTime="2026-03-02 14:27:09.667847161 +0000 UTC m=+2.271240137" Mar 2 14:27:09.671050 kubelet[2802]: E0302 14:27:09.669663 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:10.010629 kubelet[2802]: I0302 14:27:10.010331 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.010310847 podStartE2EDuration="2.010310847s" podCreationTimestamp="2026-03-02 14:27:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:27:09.766329296 +0000 UTC m=+2.369722261" watchObservedRunningTime="2026-03-02 14:27:10.010310847 +0000 UTC m=+2.613703813" Mar 2 14:27:10.541257 kubelet[2802]: I0302 14:27:10.541209 2802 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 14:27:10.558104 containerd[1549]: time="2026-03-02T14:27:10.558064536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 14:27:10.563031 kubelet[2802]: I0302 14:27:10.563008 2802 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 14:27:10.666020 kubelet[2802]: E0302 14:27:10.665983 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:10.668654 kubelet[2802]: E0302 14:27:10.666249 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:11.634060 systemd[1]: Created slice kubepods-besteffort-pod2c6783de_5ba5_478a_9dea_9d51e2d116cb.slice - libcontainer container kubepods-besteffort-pod2c6783de_5ba5_478a_9dea_9d51e2d116cb.slice. 
Mar 2 14:27:11.647780 kubelet[2802]: I0302 14:27:11.646076 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c6783de-5ba5-478a-9dea-9d51e2d116cb-kube-proxy\") pod \"kube-proxy-bzlws\" (UID: \"2c6783de-5ba5-478a-9dea-9d51e2d116cb\") " pod="kube-system/kube-proxy-bzlws" Mar 2 14:27:11.647780 kubelet[2802]: I0302 14:27:11.646199 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c6783de-5ba5-478a-9dea-9d51e2d116cb-lib-modules\") pod \"kube-proxy-bzlws\" (UID: \"2c6783de-5ba5-478a-9dea-9d51e2d116cb\") " pod="kube-system/kube-proxy-bzlws" Mar 2 14:27:11.647780 kubelet[2802]: I0302 14:27:11.646224 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c6783de-5ba5-478a-9dea-9d51e2d116cb-xtables-lock\") pod \"kube-proxy-bzlws\" (UID: \"2c6783de-5ba5-478a-9dea-9d51e2d116cb\") " pod="kube-system/kube-proxy-bzlws" Mar 2 14:27:11.647780 kubelet[2802]: I0302 14:27:11.646244 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsp5c\" (UniqueName: \"kubernetes.io/projected/2c6783de-5ba5-478a-9dea-9d51e2d116cb-kube-api-access-xsp5c\") pod \"kube-proxy-bzlws\" (UID: \"2c6783de-5ba5-478a-9dea-9d51e2d116cb\") " pod="kube-system/kube-proxy-bzlws" Mar 2 14:27:12.041373 kubelet[2802]: E0302 14:27:12.038917 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:12.052113 containerd[1549]: time="2026-03-02T14:27:12.047547103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzlws,Uid:2c6783de-5ba5-478a-9dea-9d51e2d116cb,Namespace:kube-system,Attempt:0,}" Mar 2 
14:27:12.248993 containerd[1549]: time="2026-03-02T14:27:12.248207031Z" level=info msg="connecting to shim 14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8" address="unix:///run/containerd/s/ba0fff14d7f796520d194b65a94a991017fabab681f6efa38d3fa2e93978d622" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.272996 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-etc-cni-netd\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.273041 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-lib-modules\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.273060 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-hubble-tls\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.273079 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2jz\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-kube-api-access-dv2jz\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.273100 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-bpf-maps\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.279935 kubelet[2802]: I0302 14:27:12.273118 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-xtables-lock\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273135 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-net\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273153 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-kernel\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273173 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-run\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273191 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-hostproc\") pod \"cilium-57vlg\" (UID: 
\"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273211 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cni-path\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280230 kubelet[2802]: I0302 14:27:12.273229 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-cgroup\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280425 kubelet[2802]: I0302 14:27:12.273247 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7e30928-0e34-488f-826e-eefe5d9bc161-clustermesh-secrets\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.280425 kubelet[2802]: I0302 14:27:12.273264 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-config-path\") pod \"cilium-57vlg\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") " pod="kube-system/cilium-57vlg" Mar 2 14:27:12.315991 systemd[1]: Created slice kubepods-burstable-podc7e30928_0e34_488f_826e_eefe5d9bc161.slice - libcontainer container kubepods-burstable-podc7e30928_0e34_488f_826e_eefe5d9bc161.slice. Mar 2 14:27:12.378373 systemd[1]: Created slice kubepods-besteffort-poda8b6e507_71c8_4023_90dc_a9e9a453dfd8.slice - libcontainer container kubepods-besteffort-poda8b6e507_71c8_4023_90dc_a9e9a453dfd8.slice. 
Mar 2 14:27:12.512970 kubelet[2802]: I0302 14:27:12.510606 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5c9\" (UniqueName: \"kubernetes.io/projected/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-kube-api-access-5b5c9\") pod \"cilium-operator-6f9c7c5859-np5xm\" (UID: \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\") " pod="kube-system/cilium-operator-6f9c7c5859-np5xm" Mar 2 14:27:12.512970 kubelet[2802]: I0302 14:27:12.510961 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-np5xm\" (UID: \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\") " pod="kube-system/cilium-operator-6f9c7c5859-np5xm" Mar 2 14:27:12.522106 systemd[1]: Started cri-containerd-14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8.scope - libcontainer container 14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8. 
Mar 2 14:27:12.687965 kubelet[2802]: E0302 14:27:12.687366 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:12.690846 containerd[1549]: time="2026-03-02T14:27:12.689194087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57vlg,Uid:c7e30928-0e34-488f-826e-eefe5d9bc161,Namespace:kube-system,Attempt:0,}" Mar 2 14:27:12.831112 kubelet[2802]: E0302 14:27:12.831074 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:12.833269 containerd[1549]: time="2026-03-02T14:27:12.832412749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzlws,Uid:2c6783de-5ba5-478a-9dea-9d51e2d116cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8\"" Mar 2 14:27:12.837025 kubelet[2802]: E0302 14:27:12.836644 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:12.840055 containerd[1549]: time="2026-03-02T14:27:12.840005898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-np5xm,Uid:a8b6e507-71c8-4023-90dc-a9e9a453dfd8,Namespace:kube-system,Attempt:0,}" Mar 2 14:27:12.870879 containerd[1549]: time="2026-03-02T14:27:12.868347017Z" level=info msg="CreateContainer within sandbox \"14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 14:27:12.894544 containerd[1549]: time="2026-03-02T14:27:12.894380981Z" level=info msg="connecting to shim f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34" 
address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:13.074421 containerd[1549]: time="2026-03-02T14:27:13.073548828Z" level=info msg="Container 23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:27:13.122602 systemd[1]: Started cri-containerd-f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34.scope - libcontainer container f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34. Mar 2 14:27:13.164977 containerd[1549]: time="2026-03-02T14:27:13.163421726Z" level=info msg="connecting to shim 9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99" address="unix:///run/containerd/s/6890fcf6efcbe8ba088c236e12b0e54435979350723357dfa29a39d8d9d0bc7f" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:13.167262 containerd[1549]: time="2026-03-02T14:27:13.166951884Z" level=info msg="CreateContainer within sandbox \"14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31\"" Mar 2 14:27:13.176645 containerd[1549]: time="2026-03-02T14:27:13.176286609Z" level=info msg="StartContainer for \"23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31\"" Mar 2 14:27:13.187410 containerd[1549]: time="2026-03-02T14:27:13.187215166Z" level=info msg="connecting to shim 23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31" address="unix:///run/containerd/s/ba0fff14d7f796520d194b65a94a991017fabab681f6efa38d3fa2e93978d622" protocol=ttrpc version=3 Mar 2 14:27:13.339090 systemd[1]: Started cri-containerd-9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99.scope - libcontainer container 9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99. 
Mar 2 14:27:13.425832 containerd[1549]: time="2026-03-02T14:27:13.424602486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57vlg,Uid:c7e30928-0e34-488f-826e-eefe5d9bc161,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\"" Mar 2 14:27:13.425954 kubelet[2802]: E0302 14:27:13.425852 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:13.436633 containerd[1549]: time="2026-03-02T14:27:13.435413737Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 14:27:13.465975 systemd[1]: Started cri-containerd-23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31.scope - libcontainer container 23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31. Mar 2 14:27:13.876586 containerd[1549]: time="2026-03-02T14:27:13.876443441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-np5xm,Uid:a8b6e507-71c8-4023-90dc-a9e9a453dfd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\"" Mar 2 14:27:13.892188 kubelet[2802]: E0302 14:27:13.891848 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:13.930204 containerd[1549]: time="2026-03-02T14:27:13.929295456Z" level=info msg="StartContainer for \"23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31\" returns successfully" Mar 2 14:27:14.757104 kubelet[2802]: E0302 14:27:14.756082 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 
14:27:14.847151 kubelet[2802]: E0302 14:27:14.847054 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:15.047952 kubelet[2802]: I0302 14:27:15.045462 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzlws" podStartSLOduration=4.04544389 podStartE2EDuration="4.04544389s" podCreationTimestamp="2026-03-02 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:27:15.041264835 +0000 UTC m=+7.644657821" watchObservedRunningTime="2026-03-02 14:27:15.04544389 +0000 UTC m=+7.648836876"
Mar 2 14:27:15.936875 kubelet[2802]: E0302 14:27:15.927289 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:15.950441 kubelet[2802]: E0302 14:27:15.948379 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:16.324163 kubelet[2802]: E0302 14:27:16.324035 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:16.953418 kubelet[2802]: E0302 14:27:16.952488 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:18.457222 kubelet[2802]: E0302 14:27:18.452561 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:18.974111 kubelet[2802]: E0302 14:27:18.974074 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:27:35.021942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722266840.mount: Deactivated successfully.
Mar 2 14:28:03.511971 kubelet[2802]: E0302 14:28:03.510502 2802 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.012s"
Mar 2 14:28:07.856978 containerd[1549]: time="2026-03-02T14:28:07.854595576Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:28:07.872299 containerd[1549]: time="2026-03-02T14:28:07.872019143Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 2 14:28:07.883223 containerd[1549]: time="2026-03-02T14:28:07.882240879Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:28:07.898555 containerd[1549]: time="2026-03-02T14:28:07.897331888Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 54.461438167s"
Mar 2 14:28:07.898555 containerd[1549]: time="2026-03-02T14:28:07.897381380Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 2 14:28:07.920114 containerd[1549]: time="2026-03-02T14:28:07.918571148Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 2 14:28:07.962208 containerd[1549]: time="2026-03-02T14:28:07.960169056Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 14:28:08.139629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943512326.mount: Deactivated successfully.
Mar 2 14:28:08.155380 containerd[1549]: time="2026-03-02T14:28:08.144580140Z" level=info msg="Container 83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:08.236967 containerd[1549]: time="2026-03-02T14:28:08.232380761Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\""
Mar 2 14:28:08.248639 containerd[1549]: time="2026-03-02T14:28:08.248208633Z" level=info msg="StartContainer for \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\""
Mar 2 14:28:08.258442 containerd[1549]: time="2026-03-02T14:28:08.258399096Z" level=info msg="connecting to shim 83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1" address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" protocol=ttrpc version=3
Mar 2 14:28:08.797050 systemd[1]: Started cri-containerd-83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1.scope - libcontainer container 83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1.
Mar 2 14:28:09.257877 containerd[1549]: time="2026-03-02T14:28:09.257584550Z" level=info msg="StartContainer for \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" returns successfully"
Mar 2 14:28:09.365167 systemd[1]: cri-containerd-83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1.scope: Deactivated successfully.
Mar 2 14:28:09.401314 containerd[1549]: time="2026-03-02T14:28:09.401129524Z" level=info msg="received container exit event container_id:\"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" id:\"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" pid:3211 exited_at:{seconds:1772461689 nanos:392659452}"
Mar 2 14:28:09.614326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1-rootfs.mount: Deactivated successfully.
Mar 2 14:28:09.851900 kubelet[2802]: E0302 14:28:09.851380 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:10.162444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921221516.mount: Deactivated successfully.
Mar 2 14:28:10.873941 kubelet[2802]: E0302 14:28:10.873903 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:10.910282 containerd[1549]: time="2026-03-02T14:28:10.908270261Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 14:28:11.023348 containerd[1549]: time="2026-03-02T14:28:11.023065386Z" level=info msg="Container 19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:11.077353 containerd[1549]: time="2026-03-02T14:28:11.077310823Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\""
Mar 2 14:28:11.096214 containerd[1549]: time="2026-03-02T14:28:11.095549109Z" level=info msg="StartContainer for \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\""
Mar 2 14:28:11.104963 containerd[1549]: time="2026-03-02T14:28:11.103345447Z" level=info msg="connecting to shim 19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae" address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" protocol=ttrpc version=3
Mar 2 14:28:11.276156 systemd[1]: Started cri-containerd-19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae.scope - libcontainer container 19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae.
Mar 2 14:28:11.648427 containerd[1549]: time="2026-03-02T14:28:11.646637101Z" level=info msg="StartContainer for \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" returns successfully"
Mar 2 14:28:11.788964 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 14:28:11.798057 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 14:28:11.811578 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 2 14:28:11.830202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 14:28:11.845649 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 14:28:11.848592 systemd[1]: cri-containerd-19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae.scope: Deactivated successfully.
Mar 2 14:28:11.868034 containerd[1549]: time="2026-03-02T14:28:11.867563319Z" level=info msg="received container exit event container_id:\"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" id:\"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" pid:3268 exited_at:{seconds:1772461691 nanos:867055652}"
Mar 2 14:28:11.952311 kubelet[2802]: E0302 14:28:11.946896 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:12.094297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 14:28:12.339430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae-rootfs.mount: Deactivated successfully.
Mar 2 14:28:13.004153 kubelet[2802]: E0302 14:28:13.003576 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:13.063216 containerd[1549]: time="2026-03-02T14:28:13.063169428Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 14:28:13.242199 containerd[1549]: time="2026-03-02T14:28:13.240144109Z" level=info msg="Container d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:13.325538 containerd[1549]: time="2026-03-02T14:28:13.325338679Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\""
Mar 2 14:28:13.337652 containerd[1549]: time="2026-03-02T14:28:13.332659871Z" level=info msg="StartContainer for \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\""
Mar 2 14:28:13.346375 containerd[1549]: time="2026-03-02T14:28:13.343822269Z" level=info msg="connecting to shim d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec" address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" protocol=ttrpc version=3
Mar 2 14:28:13.517828 systemd[1]: Started cri-containerd-d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec.scope - libcontainer container d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec.
Mar 2 14:28:14.103250 containerd[1549]: time="2026-03-02T14:28:14.099255781Z" level=info msg="StartContainer for \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" returns successfully"
Mar 2 14:28:14.152174 systemd[1]: cri-containerd-d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec.scope: Deactivated successfully.
Mar 2 14:28:14.286156 containerd[1549]: time="2026-03-02T14:28:14.279397053Z" level=info msg="received container exit event container_id:\"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" id:\"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" pid:3313 exited_at:{seconds:1772461694 nanos:273324801}"
Mar 2 14:28:14.598200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec-rootfs.mount: Deactivated successfully.
Mar 2 14:28:15.204827 kubelet[2802]: E0302 14:28:15.204238 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:15.313007 containerd[1549]: time="2026-03-02T14:28:15.312613943Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 14:28:15.519148 containerd[1549]: time="2026-03-02T14:28:15.514133649Z" level=info msg="Container 12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:15.529213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563990195.mount: Deactivated successfully.
Mar 2 14:28:15.604029 containerd[1549]: time="2026-03-02T14:28:15.602637474Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\""
Mar 2 14:28:15.609356 containerd[1549]: time="2026-03-02T14:28:15.609063033Z" level=info msg="StartContainer for \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\""
Mar 2 14:28:15.611456 containerd[1549]: time="2026-03-02T14:28:15.610562074Z" level=info msg="connecting to shim 12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e" address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" protocol=ttrpc version=3
Mar 2 14:28:15.757440 systemd[1]: Started cri-containerd-12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e.scope - libcontainer container 12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e.
Mar 2 14:28:16.010491 systemd[1]: cri-containerd-12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e.scope: Deactivated successfully.
Mar 2 14:28:16.023835 containerd[1549]: time="2026-03-02T14:28:16.023034516Z" level=info msg="received container exit event container_id:\"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" id:\"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" pid:3355 exited_at:{seconds:1772461696 nanos:16632894}"
Mar 2 14:28:16.046039 containerd[1549]: time="2026-03-02T14:28:16.044592024Z" level=info msg="StartContainer for \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" returns successfully"
Mar 2 14:28:16.227993 kubelet[2802]: E0302 14:28:16.220857 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:16.350571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e-rootfs.mount: Deactivated successfully.
Mar 2 14:28:17.272277 kubelet[2802]: E0302 14:28:17.270282 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:17.345649 containerd[1549]: time="2026-03-02T14:28:17.345338665Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 14:28:17.505367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614747736.mount: Deactivated successfully.
Mar 2 14:28:17.545990 containerd[1549]: time="2026-03-02T14:28:17.543610256Z" level=info msg="Container a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:17.581266 containerd[1549]: time="2026-03-02T14:28:17.579166903Z" level=info msg="CreateContainer within sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\""
Mar 2 14:28:17.589390 containerd[1549]: time="2026-03-02T14:28:17.587982892Z" level=info msg="StartContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\""
Mar 2 14:28:17.598068 containerd[1549]: time="2026-03-02T14:28:17.598030022Z" level=info msg="connecting to shim a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5" address="unix:///run/containerd/s/b3a039d90971466ae8d0a8c6fd91b497b34f702773da53c512d9535b73a63935" protocol=ttrpc version=3
Mar 2 14:28:17.759313 systemd[1]: Started cri-containerd-a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5.scope - libcontainer container a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5.
Mar 2 14:28:17.818278 containerd[1549]: time="2026-03-02T14:28:17.814877845Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:28:17.826081 containerd[1549]: time="2026-03-02T14:28:17.825040742Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 2 14:28:17.833608 containerd[1549]: time="2026-03-02T14:28:17.831044202Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:28:17.842528 containerd[1549]: time="2026-03-02T14:28:17.842069946Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 9.92345086s"
Mar 2 14:28:17.842528 containerd[1549]: time="2026-03-02T14:28:17.842117856Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 2 14:28:17.870990 containerd[1549]: time="2026-03-02T14:28:17.860639986Z" level=info msg="CreateContainer within sandbox \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 2 14:28:17.940152 containerd[1549]: time="2026-03-02T14:28:17.939492619Z" level=info msg="Container 45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:17.984780 containerd[1549]: time="2026-03-02T14:28:17.983639843Z" level=info msg="CreateContainer within sandbox \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\""
Mar 2 14:28:17.998905 containerd[1549]: time="2026-03-02T14:28:17.998865201Z" level=info msg="StartContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\""
Mar 2 14:28:18.009665 containerd[1549]: time="2026-03-02T14:28:18.009294085Z" level=info msg="connecting to shim 45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647" address="unix:///run/containerd/s/6890fcf6efcbe8ba088c236e12b0e54435979350723357dfa29a39d8d9d0bc7f" protocol=ttrpc version=3
Mar 2 14:28:18.166887 containerd[1549]: time="2026-03-02T14:28:18.164993835Z" level=info msg="StartContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" returns successfully"
Mar 2 14:28:18.321376 systemd[1]: Started cri-containerd-45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647.scope - libcontainer container 45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647.
Mar 2 14:28:18.776766 containerd[1549]: time="2026-03-02T14:28:18.776219169Z" level=info msg="StartContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" returns successfully"
Mar 2 14:28:19.053588 kubelet[2802]: I0302 14:28:19.053334 2802 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 2 14:28:19.465132 kubelet[2802]: E0302 14:28:19.463505 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:19.465132 kubelet[2802]: E0302 14:28:19.463908 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:19.510085 systemd[1]: Created slice kubepods-burstable-podae047774_0ca8_4376_9475_24d5c865f9d1.slice - libcontainer container kubepods-burstable-podae047774_0ca8_4376_9475_24d5c865f9d1.slice.
Mar 2 14:28:19.594888 kubelet[2802]: I0302 14:28:19.583629 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae047774-0ca8-4376-9475-24d5c865f9d1-config-volume\") pod \"coredns-66bc5c9577-wfjxl\" (UID: \"ae047774-0ca8-4376-9475-24d5c865f9d1\") " pod="kube-system/coredns-66bc5c9577-wfjxl"
Mar 2 14:28:19.594888 kubelet[2802]: I0302 14:28:19.590213 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ce7a48-59e7-483b-82e3-d9eeabc7d63d-config-volume\") pod \"coredns-66bc5c9577-rxs5l\" (UID: \"e2ce7a48-59e7-483b-82e3-d9eeabc7d63d\") " pod="kube-system/coredns-66bc5c9577-rxs5l"
Mar 2 14:28:19.594888 kubelet[2802]: I0302 14:28:19.590262 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxvh\" (UniqueName: \"kubernetes.io/projected/ae047774-0ca8-4376-9475-24d5c865f9d1-kube-api-access-fqxvh\") pod \"coredns-66bc5c9577-wfjxl\" (UID: \"ae047774-0ca8-4376-9475-24d5c865f9d1\") " pod="kube-system/coredns-66bc5c9577-wfjxl"
Mar 2 14:28:19.594888 kubelet[2802]: I0302 14:28:19.590323 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfwm\" (UniqueName: \"kubernetes.io/projected/e2ce7a48-59e7-483b-82e3-d9eeabc7d63d-kube-api-access-vhfwm\") pod \"coredns-66bc5c9577-rxs5l\" (UID: \"e2ce7a48-59e7-483b-82e3-d9eeabc7d63d\") " pod="kube-system/coredns-66bc5c9577-rxs5l"
Mar 2 14:28:19.591242 systemd[1]: Created slice kubepods-burstable-pode2ce7a48_59e7_483b_82e3_d9eeabc7d63d.slice - libcontainer container kubepods-burstable-pode2ce7a48_59e7_483b_82e3_d9eeabc7d63d.slice.
Mar 2 14:28:19.892367 kubelet[2802]: I0302 14:28:19.889494 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57vlg" podStartSLOduration=13.411465794 podStartE2EDuration="1m7.889470621s" podCreationTimestamp="2026-03-02 14:27:12 +0000 UTC" firstStartedPulling="2026-03-02 14:27:13.43009588 +0000 UTC m=+6.033488846" lastFinishedPulling="2026-03-02 14:28:07.908100707 +0000 UTC m=+60.511493673" observedRunningTime="2026-03-02 14:28:19.672403631 +0000 UTC m=+72.275796617" watchObservedRunningTime="2026-03-02 14:28:19.889470621 +0000 UTC m=+72.492863608"
Mar 2 14:28:19.992272 kubelet[2802]: E0302 14:28:19.990088 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:19.993436 containerd[1549]: time="2026-03-02T14:28:19.992415899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rxs5l,Uid:e2ce7a48-59e7-483b-82e3-d9eeabc7d63d,Namespace:kube-system,Attempt:0,}"
Mar 2 14:28:20.092508 kubelet[2802]: I0302 14:28:20.081815 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-np5xm" podStartSLOduration=4.132947681 podStartE2EDuration="1m8.081647514s" podCreationTimestamp="2026-03-02 14:27:12 +0000 UTC" firstStartedPulling="2026-03-02 14:27:13.895953277 +0000 UTC m=+6.499346242" lastFinishedPulling="2026-03-02 14:28:17.844653108 +0000 UTC m=+70.448046075" observedRunningTime="2026-03-02 14:28:19.901868597 +0000 UTC m=+72.505261562" watchObservedRunningTime="2026-03-02 14:28:20.081647514 +0000 UTC m=+72.685040480"
Mar 2 14:28:20.174070 kubelet[2802]: E0302 14:28:20.171497 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:20.181443 containerd[1549]: time="2026-03-02T14:28:20.181265744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wfjxl,Uid:ae047774-0ca8-4376-9475-24d5c865f9d1,Namespace:kube-system,Attempt:0,}"
Mar 2 14:28:20.478222 kubelet[2802]: E0302 14:28:20.476881 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:20.706183 kubelet[2802]: E0302 14:28:20.702609 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:25.628158 systemd-networkd[1452]: cilium_host: Link UP
Mar 2 14:28:25.635364 systemd-networkd[1452]: cilium_net: Link UP
Mar 2 14:28:25.636568 systemd-networkd[1452]: cilium_net: Gained carrier
Mar 2 14:28:25.637589 systemd-networkd[1452]: cilium_host: Gained carrier
Mar 2 14:28:25.801096 systemd-networkd[1452]: cilium_host: Gained IPv6LL
Mar 2 14:28:26.261341 systemd-networkd[1452]: cilium_net: Gained IPv6LL
Mar 2 14:28:26.648386 systemd-networkd[1452]: cilium_vxlan: Link UP
Mar 2 14:28:26.649462 systemd-networkd[1452]: cilium_vxlan: Gained carrier
Mar 2 14:28:27.498361 kubelet[2802]: E0302 14:28:27.497648 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:27.959313 kernel: NET: Registered PF_ALG protocol family
Mar 2 14:28:28.246392 systemd-networkd[1452]: cilium_vxlan: Gained IPv6LL
Mar 2 14:28:31.498559 kubelet[2802]: E0302 14:28:31.493570 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:32.492589 kubelet[2802]: E0302 14:28:32.492001 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:33.295483 systemd-networkd[1452]: lxc_health: Link UP
Mar 2 14:28:33.315476 systemd-networkd[1452]: lxc_health: Gained carrier
Mar 2 14:28:33.840565 kernel: eth0: renamed from tmp354e3
Mar 2 14:28:33.846621 systemd-networkd[1452]: lxceaa43496a951: Link UP
Mar 2 14:28:33.855885 systemd-networkd[1452]: lxceaa43496a951: Gained carrier
Mar 2 14:28:34.428051 systemd-networkd[1452]: lxc73b68fc11052: Link UP
Mar 2 14:28:34.445530 kernel: eth0: renamed from tmp76307
Mar 2 14:28:34.482879 systemd-networkd[1452]: lxc73b68fc11052: Gained carrier
Mar 2 14:28:34.692196 kubelet[2802]: E0302 14:28:34.691608 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:34.825611 kubelet[2802]: E0302 14:28:34.825574 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:35.097534 systemd-networkd[1452]: lxceaa43496a951: Gained IPv6LL
Mar 2 14:28:35.222587 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Mar 2 14:28:35.832997 kubelet[2802]: E0302 14:28:35.832474 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:36.180911 systemd-networkd[1452]: lxc73b68fc11052: Gained IPv6LL
Mar 2 14:28:44.017373 sudo[1767]: pam_unix(sudo:session): session closed for user root
Mar 2 14:28:44.057865 sshd[1766]: Connection closed by 10.0.0.1 port 56232
Mar 2 14:28:44.069440 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Mar 2 14:28:44.117067 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:56232.service: Deactivated successfully.
Mar 2 14:28:44.136961 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 14:28:44.137490 systemd[1]: session-7.scope: Consumed 13.712s CPU time, 242.5M memory peak.
Mar 2 14:28:44.165444 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit.
Mar 2 14:28:44.219574 systemd-logind[1534]: Removed session 7.
Mar 2 14:28:46.516007 kubelet[2802]: E0302 14:28:46.515603 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:56.956200 containerd[1549]: time="2026-03-02T14:28:56.955917653Z" level=info msg="connecting to shim 354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712" address="unix:///run/containerd/s/1dc6a669968f6f6c4a090def00fee0abc4106f93bbe25d0a3406ccb8e32fbdfc" namespace=k8s.io protocol=ttrpc version=3
Mar 2 14:28:56.994063 containerd[1549]: time="2026-03-02T14:28:56.994011210Z" level=info msg="connecting to shim 76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9" address="unix:///run/containerd/s/3a0756cfa1203e50ab118e652308a9667d7715db4ce9cd5e9efe4cac930fb8ba" namespace=k8s.io protocol=ttrpc version=3
Mar 2 14:28:57.206078 systemd[1]: Started cri-containerd-76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9.scope - libcontainer container 76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9.
Mar 2 14:28:57.371115 systemd[1]: Started cri-containerd-354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712.scope - libcontainer container 354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712.
Mar 2 14:28:57.427168 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 14:28:57.544203 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 14:28:57.764157 containerd[1549]: time="2026-03-02T14:28:57.764118144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rxs5l,Uid:e2ce7a48-59e7-483b-82e3-d9eeabc7d63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9\""
Mar 2 14:28:57.773512 kubelet[2802]: E0302 14:28:57.766053 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:57.830597 containerd[1549]: time="2026-03-02T14:28:57.825024889Z" level=info msg="CreateContainer within sandbox \"76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 14:28:57.940301 containerd[1549]: time="2026-03-02T14:28:57.940252571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wfjxl,Uid:ae047774-0ca8-4376-9475-24d5c865f9d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712\""
Mar 2 14:28:57.954073 kubelet[2802]: E0302 14:28:57.953237 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:28:58.012044 containerd[1549]: time="2026-03-02T14:28:58.011996667Z" level=info msg="CreateContainer within sandbox \"354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 14:28:58.036253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713025695.mount: Deactivated successfully.
Mar 2 14:28:58.096660 containerd[1549]: time="2026-03-02T14:28:58.079002436Z" level=info msg="Container a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:58.107652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092311626.mount: Deactivated successfully.
Mar 2 14:28:58.122859 containerd[1549]: time="2026-03-02T14:28:58.121465841Z" level=info msg="CreateContainer within sandbox \"76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f\""
Mar 2 14:28:58.125045 containerd[1549]: time="2026-03-02T14:28:58.125020178Z" level=info msg="StartContainer for \"a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f\""
Mar 2 14:28:58.126980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204016851.mount: Deactivated successfully.
Mar 2 14:28:58.134316 containerd[1549]: time="2026-03-02T14:28:58.134284719Z" level=info msg="connecting to shim a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f" address="unix:///run/containerd/s/3a0756cfa1203e50ab118e652308a9667d7715db4ce9cd5e9efe4cac930fb8ba" protocol=ttrpc version=3
Mar 2 14:28:58.135640 containerd[1549]: time="2026-03-02T14:28:58.135614098Z" level=info msg="Container e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:28:58.298904 containerd[1549]: time="2026-03-02T14:28:58.294952710Z" level=info msg="CreateContainer within sandbox \"354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9\""
Mar 2 14:28:58.325859 containerd[1549]: time="2026-03-02T14:28:58.324082770Z" level=info msg="StartContainer for \"e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9\""
Mar 2 14:28:58.341074 systemd[1]: Started cri-containerd-a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f.scope - libcontainer container a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f.
Mar 2 14:28:58.415439 containerd[1549]: time="2026-03-02T14:28:58.409534764Z" level=info msg="connecting to shim e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9" address="unix:///run/containerd/s/1dc6a669968f6f6c4a090def00fee0abc4106f93bbe25d0a3406ccb8e32fbdfc" protocol=ttrpc version=3
Mar 2 14:28:58.559146 systemd[1]: Started cri-containerd-e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9.scope - libcontainer container e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9.
Mar 2 14:28:58.663579 containerd[1549]: time="2026-03-02T14:28:58.663270893Z" level=info msg="StartContainer for \"a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f\" returns successfully" Mar 2 14:28:58.911898 containerd[1549]: time="2026-03-02T14:28:58.911218674Z" level=info msg="StartContainer for \"e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9\" returns successfully" Mar 2 14:28:59.441537 kubelet[2802]: E0302 14:28:59.439599 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:59.482295 kubelet[2802]: E0302 14:28:59.481913 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:59.567774 kubelet[2802]: I0302 14:28:59.545208 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wfjxl" podStartSLOduration=108.545191532 podStartE2EDuration="1m48.545191532s" podCreationTimestamp="2026-03-02 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:28:59.523862039 +0000 UTC m=+112.127255025" watchObservedRunningTime="2026-03-02 14:28:59.545191532 +0000 UTC m=+112.148584497" Mar 2 14:28:59.683210 kubelet[2802]: I0302 14:28:59.681158 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rxs5l" podStartSLOduration=108.681138075 podStartE2EDuration="1m48.681138075s" podCreationTimestamp="2026-03-02 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:28:59.669158157 +0000 UTC m=+112.272551154" watchObservedRunningTime="2026-03-02 14:28:59.681138075 +0000 UTC 
m=+112.284531060" Mar 2 14:29:00.508806 kubelet[2802]: E0302 14:29:00.504003 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:00.508806 kubelet[2802]: E0302 14:29:00.505104 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:01.516131 kubelet[2802]: E0302 14:29:01.511477 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:01.516131 kubelet[2802]: E0302 14:29:01.512006 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:36.493231 kubelet[2802]: E0302 14:29:36.492938 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:44.510248 kubelet[2802]: E0302 14:29:44.510213 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:46.495596 kubelet[2802]: E0302 14:29:46.494984 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:47.493021 kubelet[2802]: E0302 14:29:47.492982 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:48.521297 kubelet[2802]: E0302 14:29:48.521261 2802 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:54.532259 kubelet[2802]: E0302 14:29:54.531289 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:04.955393 update_engine[1535]: I20260302 14:30:04.951275 1535 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 2 14:30:04.958450 update_engine[1535]: I20260302 14:30:04.955410 1535 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 2 14:30:04.960359 update_engine[1535]: I20260302 14:30:04.960230 1535 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 2 14:30:04.975258 update_engine[1535]: I20260302 14:30:04.973291 1535 omaha_request_params.cc:62] Current group set to stable Mar 2 14:30:04.976930 update_engine[1535]: I20260302 14:30:04.976904 1535 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.977495 1535 update_attempter.cc:643] Scheduling an action processor start. 
Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.977614 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.978072 1535 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.978369 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.978385 1535 omaha_request_action.cc:272] Request: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: Mar 2 14:30:04.980218 update_engine[1535]: I20260302 14:30:04.979833 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 14:30:05.009846 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 2 14:30:05.013879 update_engine[1535]: I20260302 14:30:05.011111 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 14:30:05.015315 update_engine[1535]: I20260302 14:30:05.015038 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 2 14:30:05.035480 update_engine[1535]: E20260302 14:30:05.035264 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 14:30:05.035480 update_engine[1535]: I20260302 14:30:05.035408 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 2 14:30:06.515839 kubelet[2802]: E0302 14:30:06.514487 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:10.512478 kubelet[2802]: E0302 14:30:10.508115 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:14.899961 update_engine[1535]: I20260302 14:30:14.897953 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 14:30:14.899961 update_engine[1535]: I20260302 14:30:14.898077 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 14:30:14.901380 update_engine[1535]: I20260302 14:30:14.901348 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 2 14:30:14.931137 update_engine[1535]: E20260302 14:30:14.931072 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 14:30:14.931407 update_engine[1535]: I20260302 14:30:14.931382 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 2 14:30:24.890845 update_engine[1535]: I20260302 14:30:24.890655 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 14:30:24.892384 update_engine[1535]: I20260302 14:30:24.891826 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 14:30:24.892384 update_engine[1535]: I20260302 14:30:24.892340 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 2 14:30:24.939211 update_engine[1535]: E20260302 14:30:24.939014 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 14:30:24.939211 update_engine[1535]: I20260302 14:30:24.939158 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 2 14:30:34.890560 update_engine[1535]: I20260302 14:30:34.889912 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 14:30:34.890560 update_engine[1535]: I20260302 14:30:34.890187 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 14:30:34.938980 update_engine[1535]: I20260302 14:30:34.901387 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 2 14:30:34.939999 update_engine[1535]: E20260302 14:30:34.939944 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 14:30:34.940182 update_engine[1535]: I20260302 14:30:34.940158 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 2 14:30:34.940256 update_engine[1535]: I20260302 14:30:34.940237 1535 omaha_request_action.cc:617] Omaha request response: Mar 2 14:30:34.940431 update_engine[1535]: E20260302 14:30:34.940404 1535 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 2 14:30:34.940517 update_engine[1535]: I20260302 14:30:34.940497 1535 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 2 14:30:34.945499 update_engine[1535]: I20260302 14:30:34.940568 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 2 14:30:34.947928 update_engine[1535]: I20260302 14:30:34.947882 1535 update_attempter.cc:306] Processing Done. Mar 2 14:30:34.948212 update_engine[1535]: E20260302 14:30:34.948187 1535 update_attempter.cc:619] Update failed. 
Mar 2 14:30:34.948342 update_engine[1535]: I20260302 14:30:34.948319 1535 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 2 14:30:34.948410 update_engine[1535]: I20260302 14:30:34.948392 1535 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 2 14:30:34.948483 update_engine[1535]: I20260302 14:30:34.948464 1535 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 2 14:30:34.949221 update_engine[1535]: I20260302 14:30:34.949195 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 2 14:30:34.949315 update_engine[1535]: I20260302 14:30:34.949293 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 2 14:30:34.949387 update_engine[1535]: I20260302 14:30:34.949366 1535 omaha_request_action.cc:272] Request: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949387 update_engine[1535]: Mar 2 14:30:34.949855 update_engine[1535]: I20260302 14:30:34.949829 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 2 14:30:34.949950 update_engine[1535]: I20260302 14:30:34.949931 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 2 14:30:34.961904 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 2 14:30:34.978185 update_engine[1535]: I20260302 14:30:34.974239 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 2 14:30:35.005910 update_engine[1535]: E20260302 14:30:35.004376 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004879 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004902 1535 omaha_request_action.cc:617] Omaha request response: Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004915 1535 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004925 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004940 1535 update_attempter.cc:306] Processing Done. Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004952 1535 update_attempter.cc:310] Error event sent. 
Mar 2 14:30:35.005910 update_engine[1535]: I20260302 14:30:35.004969 1535 update_check_scheduler.cc:74] Next update check in 49m57s Mar 2 14:30:35.006442 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 2 14:30:51.495515 kubelet[2802]: E0302 14:30:51.495068 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:55.500384 kubelet[2802]: E0302 14:30:55.495356 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:56.501881 kubelet[2802]: E0302 14:30:56.499580 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:56.512478 kubelet[2802]: E0302 14:30:56.512114 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:59.497027 kubelet[2802]: E0302 14:30:59.495088 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:12.519394 kubelet[2802]: E0302 14:31:12.493633 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:21.508454 kubelet[2802]: E0302 14:31:21.503512 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:32.514864 kubelet[2802]: E0302 14:31:32.514127 2802 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:53.367445 containerd[1549]: time="2026-03-02T14:31:53.366426345Z" level=warning msg="container event discarded" container=875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e type=CONTAINER_CREATED_EVENT Mar 2 14:31:53.367445 containerd[1549]: time="2026-03-02T14:31:53.366575705Z" level=warning msg="container event discarded" container=875ae3fbbaa6bf39d6af4a159003e6a8a357170ae703949e79625cd3b48c335e type=CONTAINER_STARTED_EVENT Mar 2 14:31:53.392774 containerd[1549]: time="2026-03-02T14:31:53.392582301Z" level=warning msg="container event discarded" container=3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11 type=CONTAINER_CREATED_EVENT Mar 2 14:31:53.392774 containerd[1549]: time="2026-03-02T14:31:53.392636533Z" level=warning msg="container event discarded" container=3c25c8ee91d590bfd15caa13e7e1c5722a1672716624b033c63b86e7e1b36c11 type=CONTAINER_STARTED_EVENT Mar 2 14:31:53.452468 containerd[1549]: time="2026-03-02T14:31:53.452217918Z" level=warning msg="container event discarded" container=f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186 type=CONTAINER_CREATED_EVENT Mar 2 14:31:53.452468 containerd[1549]: time="2026-03-02T14:31:53.452420167Z" level=warning msg="container event discarded" container=f06fb0b3735699fde33090de38e9cf4922f4c148ee68a9e55ca9baf457127186 type=CONTAINER_STARTED_EVENT Mar 2 14:31:53.537954 containerd[1549]: time="2026-03-02T14:31:53.537802862Z" level=warning msg="container event discarded" container=63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b type=CONTAINER_CREATED_EVENT Mar 2 14:31:53.562250 containerd[1549]: time="2026-03-02T14:31:53.561857947Z" level=warning msg="container event discarded" container=ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e type=CONTAINER_CREATED_EVENT Mar 2 
14:31:53.602042 containerd[1549]: time="2026-03-02T14:31:53.601404064Z" level=warning msg="container event discarded" container=fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84 type=CONTAINER_CREATED_EVENT Mar 2 14:31:54.083849 containerd[1549]: time="2026-03-02T14:31:54.083464295Z" level=warning msg="container event discarded" container=63784a8a9e1fecbd2662fc4635999079fb5a4406d7f5aa6a67a056a9812dbf5b type=CONTAINER_STARTED_EVENT Mar 2 14:31:54.144267 containerd[1549]: time="2026-03-02T14:31:54.144052213Z" level=warning msg="container event discarded" container=ff680a6dbe090d1a9c11cdd54681ab4307cfb49777c85c98c81b82e8bcca373e type=CONTAINER_STARTED_EVENT Mar 2 14:31:54.205411 containerd[1549]: time="2026-03-02T14:31:54.203462120Z" level=warning msg="container event discarded" container=fe7212f5f1698eca0591eb1675bb31c7ee86d6d84c87475156125a526b3d2d84 type=CONTAINER_STARTED_EVENT Mar 2 14:32:12.503629 kubelet[2802]: E0302 14:32:12.496875 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:12.843304 containerd[1549]: time="2026-03-02T14:32:12.842855199Z" level=warning msg="container event discarded" container=14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8 type=CONTAINER_CREATED_EVENT Mar 2 14:32:12.843304 containerd[1549]: time="2026-03-02T14:32:12.842958021Z" level=warning msg="container event discarded" container=14e28ea491f23a227c77e880c3bdfa129926a491db171c6185ad500d283855c8 type=CONTAINER_STARTED_EVENT Mar 2 14:32:13.176175 containerd[1549]: time="2026-03-02T14:32:13.170604896Z" level=warning msg="container event discarded" container=23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31 type=CONTAINER_CREATED_EVENT Mar 2 14:32:13.440347 containerd[1549]: time="2026-03-02T14:32:13.439173490Z" level=warning msg="container event discarded" 
container=f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34 type=CONTAINER_CREATED_EVENT Mar 2 14:32:13.440347 containerd[1549]: time="2026-03-02T14:32:13.439265092Z" level=warning msg="container event discarded" container=f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34 type=CONTAINER_STARTED_EVENT Mar 2 14:32:13.895920 containerd[1549]: time="2026-03-02T14:32:13.888893300Z" level=warning msg="container event discarded" container=9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99 type=CONTAINER_CREATED_EVENT Mar 2 14:32:13.895920 containerd[1549]: time="2026-03-02T14:32:13.889153295Z" level=warning msg="container event discarded" container=9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99 type=CONTAINER_STARTED_EVENT Mar 2 14:32:13.943810 containerd[1549]: time="2026-03-02T14:32:13.943636651Z" level=warning msg="container event discarded" container=23e1d7abf83d5eebc48fd2a9b0164be5b5bdbe1322a3aff283f999701637ae31 type=CONTAINER_STARTED_EVENT Mar 2 14:32:15.504582 kubelet[2802]: E0302 14:32:15.501410 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:16.498167 kubelet[2802]: E0302 14:32:16.495563 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:21.496206 kubelet[2802]: E0302 14:32:21.493136 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:24.494417 kubelet[2802]: E0302 14:32:24.492948 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:27.498616 systemd[1]: 
Started sshd@7-10.0.0.8:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610). Mar 2 14:32:27.971914 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:27.978159 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:28.061394 systemd-logind[1534]: New session 8 of user core. Mar 2 14:32:28.068380 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 14:32:28.842439 sshd[4365]: Connection closed by 10.0.0.1 port 42610 Mar 2 14:32:28.845801 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:28.864891 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:42610.service: Deactivated successfully. Mar 2 14:32:28.906862 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 14:32:28.933156 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Mar 2 14:32:28.943947 systemd-logind[1534]: Removed session 8. Mar 2 14:32:33.918252 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:38252.service - OpenSSH per-connection server daemon (10.0.0.1:38252). Mar 2 14:32:34.216069 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 38252 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:34.227238 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:34.255282 systemd-logind[1534]: New session 9 of user core. Mar 2 14:32:34.279614 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 14:32:34.825334 sshd[4391]: Connection closed by 10.0.0.1 port 38252 Mar 2 14:32:34.826042 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:34.841532 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:38252.service: Deactivated successfully. Mar 2 14:32:34.850001 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 2 14:32:34.858206 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Mar 2 14:32:34.866446 systemd-logind[1534]: Removed session 9. Mar 2 14:32:37.491120 kubelet[2802]: E0302 14:32:37.491013 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:39.850444 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:38268.service - OpenSSH per-connection server daemon (10.0.0.1:38268). Mar 2 14:32:39.986430 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:39.991447 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:40.036938 systemd-logind[1534]: New session 10 of user core. Mar 2 14:32:40.041479 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 14:32:40.447026 sshd[4409]: Connection closed by 10.0.0.1 port 38268 Mar 2 14:32:40.447915 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:40.461295 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:38268.service: Deactivated successfully. Mar 2 14:32:40.466309 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 14:32:40.468622 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Mar 2 14:32:40.472545 systemd-logind[1534]: Removed session 10. Mar 2 14:32:40.498855 kubelet[2802]: E0302 14:32:40.498621 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:45.518899 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:42232.service - OpenSSH per-connection server daemon (10.0.0.1:42232). 
Mar 2 14:32:45.718081 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 42232 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:45.732304 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:45.762841 systemd-logind[1534]: New session 11 of user core. Mar 2 14:32:45.782321 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 14:32:46.192867 sshd[4427]: Connection closed by 10.0.0.1 port 42232 Mar 2 14:32:46.197023 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:46.209932 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Mar 2 14:32:46.212017 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:42232.service: Deactivated successfully. Mar 2 14:32:46.220148 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 14:32:46.228093 systemd-logind[1534]: Removed session 11. Mar 2 14:32:48.506358 kubelet[2802]: E0302 14:32:48.506261 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:51.254977 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:48306.service - OpenSSH per-connection server daemon (10.0.0.1:48306). Mar 2 14:32:51.436198 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 48306 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:51.438062 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:51.478785 systemd-logind[1534]: New session 12 of user core. Mar 2 14:32:51.500111 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 2 14:32:51.879282 sshd[4447]: Connection closed by 10.0.0.1 port 48306 Mar 2 14:32:51.882001 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:51.911320 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:48306.service: Deactivated successfully. Mar 2 14:32:51.923891 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 14:32:51.940591 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Mar 2 14:32:51.959424 systemd-logind[1534]: Removed session 12. Mar 2 14:32:56.939148 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:48310.service - OpenSSH per-connection server daemon (10.0.0.1:48310). Mar 2 14:32:57.239450 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:57.245990 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:57.272655 systemd-logind[1534]: New session 13 of user core. Mar 2 14:32:57.319007 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 14:32:57.847561 sshd[4464]: Connection closed by 10.0.0.1 port 48310 Mar 2 14:32:57.850901 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:57.871387 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:48310.service: Deactivated successfully. Mar 2 14:32:57.882643 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 14:32:57.895632 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Mar 2 14:32:57.917177 systemd-logind[1534]: Removed session 13. Mar 2 14:33:02.899083 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:59338.service - OpenSSH per-connection server daemon (10.0.0.1:59338). 
Mar 2 14:33:03.120652 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 59338 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:33:03.127013 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:33:03.177229 systemd-logind[1534]: New session 14 of user core. Mar 2 14:33:03.191246 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 14:33:03.661042 sshd[4481]: Connection closed by 10.0.0.1 port 59338 Mar 2 14:33:03.662017 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Mar 2 14:33:03.673018 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:59338.service: Deactivated successfully. Mar 2 14:33:03.687369 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 14:33:03.692589 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Mar 2 14:33:03.698110 systemd-logind[1534]: Removed session 14. Mar 2 14:33:08.245218 containerd[1549]: time="2026-03-02T14:33:08.242202007Z" level=warning msg="container event discarded" container=83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1 type=CONTAINER_CREATED_EVENT Mar 2 14:33:08.725051 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:59346.service - OpenSSH per-connection server daemon (10.0.0.1:59346). Mar 2 14:33:08.878464 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:33:08.879948 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:33:08.921899 systemd-logind[1534]: New session 15 of user core. Mar 2 14:33:08.936611 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 2 14:33:09.268913 containerd[1549]: time="2026-03-02T14:33:09.268596111Z" level=warning msg="container event discarded" container=83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:09.279593 sshd[4501]: Connection closed by 10.0.0.1 port 59346
Mar 2 14:33:09.281012 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:09.294127 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:59346.service: Deactivated successfully.
Mar 2 14:33:09.299649 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 14:33:09.307453 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
Mar 2 14:33:09.310637 systemd-logind[1534]: Removed session 15.
Mar 2 14:33:10.076477 containerd[1549]: time="2026-03-02T14:33:10.076395479Z" level=warning msg="container event discarded" container=83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1 type=CONTAINER_STOPPED_EVENT
Mar 2 14:33:11.085249 containerd[1549]: time="2026-03-02T14:33:11.085057004Z" level=warning msg="container event discarded" container=19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae type=CONTAINER_CREATED_EVENT
Mar 2 14:33:11.659799 containerd[1549]: time="2026-03-02T14:33:11.659598791Z" level=warning msg="container event discarded" container=19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae type=CONTAINER_STARTED_EVENT
Mar 2 14:33:12.474268 containerd[1549]: time="2026-03-02T14:33:12.473384002Z" level=warning msg="container event discarded" container=19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae type=CONTAINER_STOPPED_EVENT
Mar 2 14:33:13.335754 containerd[1549]: time="2026-03-02T14:33:13.332872243Z" level=warning msg="container event discarded" container=d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec type=CONTAINER_CREATED_EVENT
Mar 2 14:33:14.092494 containerd[1549]: time="2026-03-02T14:33:14.092125609Z" level=warning msg="container event discarded" container=d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec type=CONTAINER_STARTED_EVENT
Mar 2 14:33:14.330020 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:37014.service - OpenSSH per-connection server daemon (10.0.0.1:37014).
Mar 2 14:33:14.507483 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 37014 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:14.517249 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:14.556421 systemd-logind[1534]: New session 16 of user core.
Mar 2 14:33:14.593558 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 14:33:14.765432 containerd[1549]: time="2026-03-02T14:33:14.764116080Z" level=warning msg="container event discarded" container=d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec type=CONTAINER_STOPPED_EVENT
Mar 2 14:33:15.086838 sshd[4518]: Connection closed by 10.0.0.1 port 37014
Mar 2 14:33:15.090303 sshd-session[4515]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:15.105481 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:37014.service: Deactivated successfully.
Mar 2 14:33:15.110788 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 14:33:15.120310 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
Mar 2 14:33:15.126943 systemd-logind[1534]: Removed session 16.
Mar 2 14:33:15.606034 containerd[1549]: time="2026-03-02T14:33:15.605874760Z" level=warning msg="container event discarded" container=12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e type=CONTAINER_CREATED_EVENT
Mar 2 14:33:16.041423 containerd[1549]: time="2026-03-02T14:33:16.040943142Z" level=warning msg="container event discarded" container=12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e type=CONTAINER_STARTED_EVENT
Mar 2 14:33:16.469967 containerd[1549]: time="2026-03-02T14:33:16.465497358Z" level=warning msg="container event discarded" container=12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e type=CONTAINER_STOPPED_EVENT
Mar 2 14:33:17.583112 containerd[1549]: time="2026-03-02T14:33:17.582798122Z" level=warning msg="container event discarded" container=a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:17.993120 containerd[1549]: time="2026-03-02T14:33:17.992963038Z" level=warning msg="container event discarded" container=45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:18.163845 containerd[1549]: time="2026-03-02T14:33:18.161482780Z" level=warning msg="container event discarded" container=a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:18.772326 containerd[1549]: time="2026-03-02T14:33:18.772103617Z" level=warning msg="container event discarded" container=45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:20.133496 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:37820.service - OpenSSH per-connection server daemon (10.0.0.1:37820).
Mar 2 14:33:20.344219 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 37820 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:20.351489 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:20.421461 systemd-logind[1534]: New session 17 of user core.
Mar 2 14:33:20.450084 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 14:33:21.015195 sshd[4538]: Connection closed by 10.0.0.1 port 37820
Mar 2 14:33:21.017011 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:21.042105 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:37820.service: Deactivated successfully.
Mar 2 14:33:21.060531 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 14:33:21.069873 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
Mar 2 14:33:21.081077 systemd-logind[1534]: Removed session 17.
Mar 2 14:33:26.051065 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:37836.service - OpenSSH per-connection server daemon (10.0.0.1:37836).
Mar 2 14:33:26.234206 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 37836 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:26.239826 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:26.279315 systemd-logind[1534]: New session 18 of user core.
Mar 2 14:33:26.302609 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 14:33:26.493485 kubelet[2802]: E0302 14:33:26.491241 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:26.711238 sshd[4555]: Connection closed by 10.0.0.1 port 37836
Mar 2 14:33:26.713468 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:26.731619 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:37836.service: Deactivated successfully.
Mar 2 14:33:26.735379 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 14:33:26.742560 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850).
Mar 2 14:33:26.742814 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit.
Mar 2 14:33:26.751617 systemd-logind[1534]: Removed session 18.
Mar 2 14:33:26.931176 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:26.938805 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:26.984921 systemd-logind[1534]: New session 19 of user core.
Mar 2 14:33:27.016343 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 14:33:27.607816 sshd[4572]: Connection closed by 10.0.0.1 port 37850
Mar 2 14:33:27.612567 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:27.638615 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:37850.service: Deactivated successfully.
Mar 2 14:33:27.648239 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 14:33:27.659838 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit.
Mar 2 14:33:27.676225 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:37862.service - OpenSSH per-connection server daemon (10.0.0.1:37862).
Mar 2 14:33:27.691136 systemd-logind[1534]: Removed session 19.
Mar 2 14:33:27.848592 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 37862 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:27.854952 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:27.879158 systemd-logind[1534]: New session 20 of user core.
Mar 2 14:33:27.895468 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 14:33:28.326799 sshd[4587]: Connection closed by 10.0.0.1 port 37862
Mar 2 14:33:28.327544 sshd-session[4584]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:28.348931 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:37862.service: Deactivated successfully.
Mar 2 14:33:28.358754 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 14:33:28.362595 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Mar 2 14:33:28.372242 systemd-logind[1534]: Removed session 20.
Mar 2 14:33:33.360218 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504).
Mar 2 14:33:33.547813 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:33.556334 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:33.579589 systemd-logind[1534]: New session 21 of user core.
Mar 2 14:33:33.603266 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 14:33:33.965028 sshd[4604]: Connection closed by 10.0.0.1 port 39504
Mar 2 14:33:33.967338 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:33.983094 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:39504.service: Deactivated successfully.
Mar 2 14:33:33.987233 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 14:33:33.995546 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Mar 2 14:33:34.001805 systemd-logind[1534]: Removed session 21. Mar 2 14:33:36.495065 kubelet[2802]: E0302 14:33:36.491592 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:33:38.499289 kubelet[2802]: E0302 14:33:38.498994 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:33:39.010652 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:39516.service - OpenSSH per-connection server daemon (10.0.0.1:39516). Mar 2 14:33:39.131857 sshd[4618]: Accepted publickey for core from 10.0.0.1 port 39516 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:33:39.135215 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:33:39.151439 systemd-logind[1534]: New session 22 of user core. Mar 2 14:33:39.159337 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 14:33:39.485175 sshd[4621]: Connection closed by 10.0.0.1 port 39516 Mar 2 14:33:39.485802 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Mar 2 14:33:39.501759 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:39516.service: Deactivated successfully. Mar 2 14:33:39.505902 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 14:33:39.513901 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. Mar 2 14:33:39.519540 systemd-logind[1534]: Removed session 22. Mar 2 14:33:40.493822 kubelet[2802]: E0302 14:33:40.493151 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:33:44.514958 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:54968.service - OpenSSH per-connection server daemon (10.0.0.1:54968). 
Mar 2 14:33:44.671180 sshd[4635]: Accepted publickey for core from 10.0.0.1 port 54968 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:44.674208 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:44.685989 systemd-logind[1534]: New session 23 of user core.
Mar 2 14:33:44.705225 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 14:33:44.969498 sshd[4638]: Connection closed by 10.0.0.1 port 54968
Mar 2 14:33:44.971830 sshd-session[4635]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:44.985781 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:54968.service: Deactivated successfully.
Mar 2 14:33:44.990470 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 14:33:44.995981 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit.
Mar 2 14:33:45.001078 systemd-logind[1534]: Removed session 23.
Mar 2 14:33:46.494147 kubelet[2802]: E0302 14:33:46.493471 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:50.013954 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976).
Mar 2 14:33:50.132129 sshd[4653]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:50.134389 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:50.149487 systemd-logind[1534]: New session 24 of user core.
Mar 2 14:33:50.155962 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 14:33:50.359944 sshd[4656]: Connection closed by 10.0.0.1 port 54976
Mar 2 14:33:50.362210 sshd-session[4653]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:50.374644 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:54976.service: Deactivated successfully.
Mar 2 14:33:50.379201 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 14:33:50.385527 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Mar 2 14:33:50.394474 systemd-logind[1534]: Removed session 24.
Mar 2 14:33:55.381161 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:53460.service - OpenSSH per-connection server daemon (10.0.0.1:53460).
Mar 2 14:33:55.486360 sshd[4670]: Accepted publickey for core from 10.0.0.1 port 53460 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:55.489852 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:55.500925 systemd-logind[1534]: New session 25 of user core.
Mar 2 14:33:55.511221 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 14:33:55.709119 sshd[4673]: Connection closed by 10.0.0.1 port 53460
Mar 2 14:33:55.707225 sshd-session[4670]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:55.725823 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:53460.service: Deactivated successfully.
Mar 2 14:33:55.729506 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 14:33:55.736090 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Mar 2 14:33:55.739407 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468).
Mar 2 14:33:55.742382 systemd-logind[1534]: Removed session 25.
Mar 2 14:33:55.831270 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:55.834383 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:55.850832 systemd-logind[1534]: New session 26 of user core.
Mar 2 14:33:55.868463 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 14:33:56.437376 sshd[4689]: Connection closed by 10.0.0.1 port 53468
Mar 2 14:33:56.441558 sshd-session[4686]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:56.458508 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:53468.service: Deactivated successfully.
Mar 2 14:33:56.462451 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 14:33:56.465122 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Mar 2 14:33:56.472086 systemd[1]: Started sshd@26-10.0.0.8:22-10.0.0.1:53474.service - OpenSSH per-connection server daemon (10.0.0.1:53474).
Mar 2 14:33:56.476049 systemd-logind[1534]: Removed session 26.
Mar 2 14:33:56.492226 kubelet[2802]: E0302 14:33:56.492077 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:56.603782 sshd[4702]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:56.606588 sshd-session[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:56.617199 systemd-logind[1534]: New session 27 of user core.
Mar 2 14:33:56.625063 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 14:33:57.607525 sshd[4706]: Connection closed by 10.0.0.1 port 53474
Mar 2 14:33:57.608089 sshd-session[4702]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:57.632869 systemd[1]: sshd@26-10.0.0.8:22-10.0.0.1:53474.service: Deactivated successfully.
Mar 2 14:33:57.646978 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 14:33:57.658931 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit.
Mar 2 14:33:57.662087 systemd[1]: Started sshd@27-10.0.0.8:22-10.0.0.1:53480.service - OpenSSH per-connection server daemon (10.0.0.1:53480).
Mar 2 14:33:57.674647 systemd-logind[1534]: Removed session 27.
Mar 2 14:33:57.763889 sshd[4727]: Accepted publickey for core from 10.0.0.1 port 53480 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:57.767383 sshd-session[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:57.775184 containerd[1549]: time="2026-03-02T14:33:57.775043739Z" level=warning msg="container event discarded" container=76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:57.775184 containerd[1549]: time="2026-03-02T14:33:57.775174782Z" level=warning msg="container event discarded" container=76307d65fec00197b27c00d822e34709e7d37d39dcd5c3fbe46fdfce1a56b7c9 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:57.787163 systemd-logind[1534]: New session 28 of user core.
Mar 2 14:33:57.803429 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 14:33:57.928520 containerd[1549]: time="2026-03-02T14:33:57.927990275Z" level=warning msg="container event discarded" container=354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:57.951916 containerd[1549]: time="2026-03-02T14:33:57.951781688Z" level=warning msg="container event discarded" container=354e3fdbe43a0e7d2fd5c4f9177c38357a7a860805d27038a9085e06367cb712 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:58.126038 containerd[1549]: time="2026-03-02T14:33:58.125953946Z" level=warning msg="container event discarded" container=a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f type=CONTAINER_CREATED_EVENT
Mar 2 14:33:58.253632 containerd[1549]: time="2026-03-02T14:33:58.253107863Z" level=warning msg="container event discarded" container=e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:58.350506 sshd[4730]: Connection closed by 10.0.0.1 port 53480
Mar 2 14:33:58.352933 sshd-session[4727]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:58.368476 systemd[1]: sshd@27-10.0.0.8:22-10.0.0.1:53480.service: Deactivated successfully.
Mar 2 14:33:58.372143 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 14:33:58.375767 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit.
Mar 2 14:33:58.385427 systemd[1]: Started sshd@28-10.0.0.8:22-10.0.0.1:53484.service - OpenSSH per-connection server daemon (10.0.0.1:53484).
Mar 2 14:33:58.388526 systemd-logind[1534]: Removed session 28.
Mar 2 14:33:58.487114 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 53484 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:58.491031 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:58.515062 systemd-logind[1534]: New session 29 of user core.
Mar 2 14:33:58.533115 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 14:33:58.668901 containerd[1549]: time="2026-03-02T14:33:58.668838606Z" level=warning msg="container event discarded" container=a8c0375ea6144104dc4910059985fb915d97436bfebf997e0f88c0ea16b2c22f type=CONTAINER_STARTED_EVENT
Mar 2 14:33:58.752101 sshd[4745]: Connection closed by 10.0.0.1 port 53484
Mar 2 14:33:58.754072 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:58.763477 systemd[1]: sshd@28-10.0.0.8:22-10.0.0.1:53484.service: Deactivated successfully.
Mar 2 14:33:58.767965 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 14:33:58.774050 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
Mar 2 14:33:58.780101 systemd-logind[1534]: Removed session 29.
Mar 2 14:33:58.868587 containerd[1549]: time="2026-03-02T14:33:58.868510280Z" level=warning msg="container event discarded" container=e4ebd97868793d9e58a5c8096cc975441a8d29cf18c71a19e73e20a083a382c9 type=CONTAINER_STARTED_EVENT
Mar 2 14:34:03.783863 systemd[1]: Started sshd@29-10.0.0.8:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686).
Mar 2 14:34:03.901272 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:03.905878 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:03.929386 systemd-logind[1534]: New session 30 of user core.
Mar 2 14:34:03.939993 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 14:34:04.248491 sshd[4763]: Connection closed by 10.0.0.1 port 46686
Mar 2 14:34:04.249108 sshd-session[4760]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:04.257064 systemd[1]: sshd@29-10.0.0.8:22-10.0.0.1:46686.service: Deactivated successfully.
Mar 2 14:34:04.264819 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 14:34:04.273556 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Mar 2 14:34:04.289972 systemd-logind[1534]: Removed session 30.
Mar 2 14:34:04.497096 kubelet[2802]: E0302 14:34:04.496634 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:07.492157 kubelet[2802]: E0302 14:34:07.492111 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:09.269306 systemd[1]: Started sshd@30-10.0.0.8:22-10.0.0.1:46700.service - OpenSSH per-connection server daemon (10.0.0.1:46700).
Mar 2 14:34:09.364322 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 46700 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:09.366639 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:09.379915 systemd-logind[1534]: New session 31 of user core.
Mar 2 14:34:09.390462 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 14:34:09.575474 sshd[4783]: Connection closed by 10.0.0.1 port 46700
Mar 2 14:34:09.574786 sshd-session[4780]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:09.585136 systemd[1]: sshd@30-10.0.0.8:22-10.0.0.1:46700.service: Deactivated successfully.
Mar 2 14:34:09.589301 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 14:34:09.591991 systemd-logind[1534]: Session 31 logged out. Waiting for processes to exit.
Mar 2 14:34:09.595857 systemd-logind[1534]: Removed session 31.
Mar 2 14:34:14.597278 systemd[1]: Started sshd@31-10.0.0.8:22-10.0.0.1:57796.service - OpenSSH per-connection server daemon (10.0.0.1:57796).
Mar 2 14:34:14.687280 sshd[4797]: Accepted publickey for core from 10.0.0.1 port 57796 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:14.690588 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:14.704460 systemd-logind[1534]: New session 32 of user core.
Mar 2 14:34:14.715087 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 14:34:14.912322 sshd[4800]: Connection closed by 10.0.0.1 port 57796
Mar 2 14:34:14.914000 sshd-session[4797]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:14.923359 systemd[1]: sshd@31-10.0.0.8:22-10.0.0.1:57796.service: Deactivated successfully.
Mar 2 14:34:14.930552 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 14:34:14.935487 systemd-logind[1534]: Session 32 logged out. Waiting for processes to exit.
Mar 2 14:34:14.949490 systemd-logind[1534]: Removed session 32.
Mar 2 14:34:19.952882 systemd[1]: Started sshd@32-10.0.0.8:22-10.0.0.1:57802.service - OpenSSH per-connection server daemon (10.0.0.1:57802).
Mar 2 14:34:20.148526 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 57802 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:20.153344 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:20.165058 systemd-logind[1534]: New session 33 of user core.
Mar 2 14:34:20.185285 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 14:34:20.561870 sshd[4818]: Connection closed by 10.0.0.1 port 57802
Mar 2 14:34:20.562613 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:20.576303 systemd[1]: sshd@32-10.0.0.8:22-10.0.0.1:57802.service: Deactivated successfully.
Mar 2 14:34:20.586842 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 14:34:20.617818 systemd-logind[1534]: Session 33 logged out. Waiting for processes to exit.
Mar 2 14:34:20.629084 systemd[1]: Started sshd@33-10.0.0.8:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884).
Mar 2 14:34:20.640950 systemd-logind[1534]: Removed session 33.
Mar 2 14:34:20.780815 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:20.783436 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:20.823395 systemd-logind[1534]: New session 34 of user core.
Mar 2 14:34:20.832033 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 14:34:22.876558 containerd[1549]: time="2026-03-02T14:34:22.876503901Z" level=info msg="StopContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" with timeout 30 (s)"
Mar 2 14:34:22.910451 containerd[1549]: time="2026-03-02T14:34:22.910321160Z" level=info msg="Stop container \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" with signal terminated"
Mar 2 14:34:22.980611 systemd[1]: cri-containerd-45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647.scope: Deactivated successfully.
Mar 2 14:34:22.981333 systemd[1]: cri-containerd-45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647.scope: Consumed 2.499s CPU time, 26.1M memory peak, 4K written to disk.
Mar 2 14:34:23.018235 containerd[1549]: time="2026-03-02T14:34:23.018166026Z" level=info msg="received container exit event container_id:\"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" id:\"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" pid:3430 exited_at:{seconds:1772462063 nanos:8260786}"
Mar 2 14:34:23.038615 containerd[1549]: time="2026-03-02T14:34:23.038488631Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 14:34:23.055225 containerd[1549]: time="2026-03-02T14:34:23.054918882Z" level=info msg="StopContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" with timeout 2 (s)"
Mar 2 14:34:23.056523 containerd[1549]: time="2026-03-02T14:34:23.056034654Z" level=info msg="Stop container \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" with signal terminated"
Mar 2 14:34:23.086004 systemd-networkd[1452]: lxc_health: Link DOWN
Mar 2 14:34:23.086017 systemd-networkd[1452]: lxc_health: Lost carrier
Mar 2 14:34:23.151507 systemd[1]: cri-containerd-a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5.scope: Deactivated successfully.
Mar 2 14:34:23.154555 systemd[1]: cri-containerd-a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5.scope: Consumed 24.518s CPU time, 128.3M memory peak, 319K read from disk, 13.3M written to disk.
Mar 2 14:34:23.166955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647-rootfs.mount: Deactivated successfully.
Mar 2 14:34:23.169168 containerd[1549]: time="2026-03-02T14:34:23.167153110Z" level=info msg="received container exit event container_id:\"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" id:\"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" pid:3399 exited_at:{seconds:1772462063 nanos:165245199}"
Mar 2 14:34:23.223517 containerd[1549]: time="2026-03-02T14:34:23.223470714Z" level=info msg="StopContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" returns successfully"
Mar 2 14:34:23.238555 containerd[1549]: time="2026-03-02T14:34:23.238519628Z" level=info msg="StopPodSandbox for \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\""
Mar 2 14:34:23.239003 containerd[1549]: time="2026-03-02T14:34:23.238855030Z" level=info msg="Container to stop \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.257308 systemd[1]: cri-containerd-9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99.scope: Deactivated successfully.
Mar 2 14:34:23.262871 containerd[1549]: time="2026-03-02T14:34:23.262644727Z" level=info msg="received sandbox exit event container_id:\"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" id:\"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" exit_status:137 exited_at:{seconds:1772462063 nanos:261924826}" monitor_name=podsandbox
Mar 2 14:34:23.284927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5-rootfs.mount: Deactivated successfully.
Mar 2 14:34:23.328215 containerd[1549]: time="2026-03-02T14:34:23.327833668Z" level=info msg="StopContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" returns successfully"
Mar 2 14:34:23.332854 containerd[1549]: time="2026-03-02T14:34:23.332824656Z" level=info msg="StopPodSandbox for \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\""
Mar 2 14:34:23.333024 containerd[1549]: time="2026-03-02T14:34:23.333003337Z" level=info msg="Container to stop \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.333197 containerd[1549]: time="2026-03-02T14:34:23.333174915Z" level=info msg="Container to stop \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.333273 containerd[1549]: time="2026-03-02T14:34:23.333257619Z" level=info msg="Container to stop \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.333339 containerd[1549]: time="2026-03-02T14:34:23.333324333Z" level=info msg="Container to stop \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.333401 containerd[1549]: time="2026-03-02T14:34:23.333386528Z" level=info msg="Container to stop \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 14:34:23.372605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99-rootfs.mount: Deactivated successfully.
Mar 2 14:34:23.383204 systemd[1]: cri-containerd-f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34.scope: Deactivated successfully.
Mar 2 14:34:23.387039 containerd[1549]: time="2026-03-02T14:34:23.386989873Z" level=info msg="received sandbox exit event container_id:\"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" id:\"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" exit_status:137 exited_at:{seconds:1772462063 nanos:386448153}" monitor_name=podsandbox
Mar 2 14:34:23.405488 containerd[1549]: time="2026-03-02T14:34:23.396565336Z" level=info msg="shim disconnected" id=9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99 namespace=k8s.io
Mar 2 14:34:23.405488 containerd[1549]: time="2026-03-02T14:34:23.396588968Z" level=warning msg="cleaning up after shim disconnected" id=9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99 namespace=k8s.io
Mar 2 14:34:23.405488 containerd[1549]: time="2026-03-02T14:34:23.396598366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 14:34:23.513906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99-shm.mount: Deactivated successfully.
Mar 2 14:34:23.523415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34-rootfs.mount: Deactivated successfully.
Mar 2 14:34:23.525378 containerd[1549]: time="2026-03-02T14:34:23.525207653Z" level=info msg="received sandbox container exit event sandbox_id:\"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" exit_status:137 exited_at:{seconds:1772462063 nanos:261924826}" monitor_name=criService
Mar 2 14:34:23.548162 containerd[1549]: time="2026-03-02T14:34:23.546468601Z" level=info msg="shim disconnected" id=f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34 namespace=k8s.io
Mar 2 14:34:23.548162 containerd[1549]: time="2026-03-02T14:34:23.546506381Z" level=warning msg="cleaning up after shim disconnected" id=f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34 namespace=k8s.io
Mar 2 14:34:23.548162 containerd[1549]: time="2026-03-02T14:34:23.546516940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 14:34:23.583533 containerd[1549]: time="2026-03-02T14:34:23.580019494Z" level=info msg="TearDown network for sandbox \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" successfully"
Mar 2 14:34:23.584170 containerd[1549]: time="2026-03-02T14:34:23.583827786Z" level=info msg="StopPodSandbox for \"9c646d85afd05d7bd91a6c2a5d480756e8a9f91b7ba8002ca0b4b3e432548f99\" returns successfully"
Mar 2 14:34:23.587030 containerd[1549]: time="2026-03-02T14:34:23.586852717Z" level=info msg="received sandbox container exit event sandbox_id:\"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" exit_status:137 exited_at:{seconds:1772462063 nanos:386448153}" monitor_name=criService
Mar 2 14:34:23.590452 containerd[1549]: time="2026-03-02T14:34:23.590321230Z" level=info msg="TearDown network for sandbox \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" successfully"
Mar 2 14:34:23.590452 containerd[1549]: time="2026-03-02T14:34:23.590361745Z" level=info msg="StopPodSandbox for \"f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34\" returns successfully"
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826058 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-hostproc\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826270 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2jz\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-kube-api-access-dv2jz\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826296 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-xtables-lock\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826318 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-run\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826340 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-etc-cni-netd\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.826386 kubelet[2802]: I0302 14:34:23.826362 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-net\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826382 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-cgroup\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826406 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7e30928-0e34-488f-826e-eefe5d9bc161-clustermesh-secrets\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826427 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b5c9\" (UniqueName: \"kubernetes.io/projected/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-kube-api-access-5b5c9\") pod \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\" (UID: \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\") "
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826422 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826452 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-config-path\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827407 kubelet[2802]: I0302 14:34:23.826470 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-lib-modules\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827635 kubelet[2802]: I0302 14:34:23.826480 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.827635 kubelet[2802]: I0302 14:34:23.826491 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cni-path\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827635 kubelet[2802]: I0302 14:34:23.826505 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.827635 kubelet[2802]: I0302 14:34:23.826517 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-hubble-tls\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827635 kubelet[2802]: I0302 14:34:23.826525 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826548 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826567 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-bpf-maps\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826591 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-kernel\") pod \"c7e30928-0e34-488f-826e-eefe5d9bc161\" (UID: \"c7e30928-0e34-488f-826e-eefe5d9bc161\") "
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826617 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-cilium-config-path\") pod \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\" (UID: \"a8b6e507-71c8-4023-90dc-a9e9a453dfd8\") "
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826779 2802 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826800 2802 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.827977 kubelet[2802]: I0302 14:34:23.826814 2802 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.829581 kubelet[2802]: I0302 14:34:23.826825 2802 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.829581 kubelet[2802]: I0302 14:34:23.826837 2802 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.829581 kubelet[2802]: I0302 14:34:23.828329 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.829581 kubelet[2802]: I0302 14:34:23.828358 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.838229 kubelet[2802]: I0302 14:34:23.835586 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.838229 kubelet[2802]: I0302 14:34:23.835816 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.840060 kubelet[2802]: I0302 14:34:23.840033 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 14:34:23.842402 kubelet[2802]: I0302 14:34:23.842375 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 14:34:23.845599 kubelet[2802]: I0302 14:34:23.845555 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8b6e507-71c8-4023-90dc-a9e9a453dfd8" (UID: "a8b6e507-71c8-4023-90dc-a9e9a453dfd8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 14:34:23.847185 kubelet[2802]: I0302 14:34:23.847037 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-kube-api-access-dv2jz" (OuterVolumeSpecName: "kube-api-access-dv2jz") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "kube-api-access-dv2jz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 14:34:23.847272 kubelet[2802]: I0302 14:34:23.847212 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-kube-api-access-5b5c9" (OuterVolumeSpecName: "kube-api-access-5b5c9") pod "a8b6e507-71c8-4023-90dc-a9e9a453dfd8" (UID: "a8b6e507-71c8-4023-90dc-a9e9a453dfd8"). InnerVolumeSpecName "kube-api-access-5b5c9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 14:34:23.847272 kubelet[2802]: I0302 14:34:23.846810 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 14:34:23.850389 kubelet[2802]: I0302 14:34:23.850273 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e30928-0e34-488f-826e-eefe5d9bc161-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7e30928-0e34-488f-826e-eefe5d9bc161" (UID: "c7e30928-0e34-488f-826e-eefe5d9bc161"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 2 14:34:23.928654 kubelet[2802]: I0302 14:34:23.927980 2802 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.928654 kubelet[2802]: I0302 14:34:23.928773 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dv2jz\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-kube-api-access-dv2jz\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.928654 kubelet[2802]: I0302 14:34:23.928800 2802 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.928654 kubelet[2802]: I0302 14:34:23.928813 2802 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7e30928-0e34-488f-826e-eefe5d9bc161-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928825 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5b5c9\" (UniqueName: \"kubernetes.io/projected/a8b6e507-71c8-4023-90dc-a9e9a453dfd8-kube-api-access-5b5c9\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928835 2802 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7e30928-0e34-488f-826e-eefe5d9bc161-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928845 2802 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928856 2802 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928867 2802 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7e30928-0e34-488f-826e-eefe5d9bc161-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928876 2802 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:23.929192 kubelet[2802]: I0302 14:34:23.928887 2802 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7e30928-0e34-488f-826e-eefe5d9bc161-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 2 14:34:24.159913 kubelet[2802]: I0302 14:34:24.158793 2802 scope.go:117] "RemoveContainer" containerID="45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647"
Mar 2 14:34:24.163790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6c9a86918f5951af9b9281d2f3658c0af82a183a66125c3fac86eded1130d34-shm.mount: Deactivated successfully.
Mar 2 14:34:24.163928 systemd[1]: var-lib-kubelet-pods-a8b6e507\x2d71c8\x2d4023\x2d90dc\x2da9e9a453dfd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5b5c9.mount: Deactivated successfully.
Mar 2 14:34:24.164025 systemd[1]: var-lib-kubelet-pods-c7e30928\x2d0e34\x2d488f\x2d826e\x2deefe5d9bc161-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddv2jz.mount: Deactivated successfully.
Mar 2 14:34:24.164216 systemd[1]: var-lib-kubelet-pods-c7e30928\x2d0e34\x2d488f\x2d826e\x2deefe5d9bc161-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 2 14:34:24.164315 systemd[1]: var-lib-kubelet-pods-c7e30928\x2d0e34\x2d488f\x2d826e\x2deefe5d9bc161-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 2 14:34:24.178589 systemd[1]: Removed slice kubepods-besteffort-poda8b6e507_71c8_4023_90dc_a9e9a453dfd8.slice - libcontainer container kubepods-besteffort-poda8b6e507_71c8_4023_90dc_a9e9a453dfd8.slice.
Mar 2 14:34:24.178988 systemd[1]: kubepods-besteffort-poda8b6e507_71c8_4023_90dc_a9e9a453dfd8.slice: Consumed 2.579s CPU time, 26.4M memory peak, 4K written to disk.
Mar 2 14:34:24.182627 containerd[1549]: time="2026-03-02T14:34:24.182595196Z" level=info msg="RemoveContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\""
Mar 2 14:34:24.196960 containerd[1549]: time="2026-03-02T14:34:24.196839576Z" level=info msg="RemoveContainer for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" returns successfully"
Mar 2 14:34:24.197370 kubelet[2802]: I0302 14:34:24.197265 2802 scope.go:117] "RemoveContainer" containerID="45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647"
Mar 2 14:34:24.212128 containerd[1549]: time="2026-03-02T14:34:24.199022538Z" level=error msg="ContainerStatus for \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\": not found"
Mar 2 14:34:24.212470 kubelet[2802]: E0302 14:34:24.212441 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\": not found" containerID="45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647"
Mar 2 14:34:24.212506 systemd[1]: Removed slice kubepods-burstable-podc7e30928_0e34_488f_826e_eefe5d9bc161.slice - libcontainer container kubepods-burstable-podc7e30928_0e34_488f_826e_eefe5d9bc161.slice.
Mar 2 14:34:24.212647 systemd[1]: kubepods-burstable-podc7e30928_0e34_488f_826e_eefe5d9bc161.slice: Consumed 24.923s CPU time, 128.6M memory peak, 1M read from disk, 13.3M written to disk.
Mar 2 14:34:24.212873 kubelet[2802]: I0302 14:34:24.212799 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647"} err="failed to get container status \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\": rpc error: code = NotFound desc = an error occurred when try to find container \"45c290ee64b0589e82fc5d76eaf7925b0e3bc818979db4b8fda45030a9904647\": not found"
Mar 2 14:34:24.212873 kubelet[2802]: I0302 14:34:24.212849 2802 scope.go:117] "RemoveContainer" containerID="a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5"
Mar 2 14:34:24.217310 containerd[1549]: time="2026-03-02T14:34:24.217051889Z" level=info msg="RemoveContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\""
Mar 2 14:34:24.235443 containerd[1549]: time="2026-03-02T14:34:24.234155206Z" level=info msg="RemoveContainer for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" returns successfully"
Mar 2 14:34:24.235883 kubelet[2802]: I0302 14:34:24.234475 2802 scope.go:117] "RemoveContainer" containerID="12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e"
Mar 2 14:34:24.240235 containerd[1549]: time="2026-03-02T14:34:24.239951250Z" level=info msg="RemoveContainer for \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\""
Mar 2 14:34:24.265236 containerd[1549]: time="2026-03-02T14:34:24.264201756Z" level=info msg="RemoveContainer for \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" returns successfully"
Mar 2 14:34:24.265432 kubelet[2802]: I0302 14:34:24.264524 2802 scope.go:117] "RemoveContainer" containerID="d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec"
Mar 2 14:34:24.280996 containerd[1549]: time="2026-03-02T14:34:24.278407955Z" level=info msg="RemoveContainer for \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\""
Mar 2 14:34:24.302443 containerd[1549]: time="2026-03-02T14:34:24.301626238Z" level=info msg="RemoveContainer for \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" returns successfully"
Mar 2 14:34:24.305176 kubelet[2802]: I0302 14:34:24.302001 2802 scope.go:117] "RemoveContainer" containerID="19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae"
Mar 2 14:34:24.313471 containerd[1549]: time="2026-03-02T14:34:24.313434157Z" level=info msg="RemoveContainer for \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\""
Mar 2 14:34:24.332985 containerd[1549]: time="2026-03-02T14:34:24.332941838Z" level=info msg="RemoveContainer for \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" returns successfully"
Mar 2 14:34:24.335227 kubelet[2802]: I0302 14:34:24.333901 2802 scope.go:117] "RemoveContainer" containerID="83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1"
Mar 2 14:34:24.340420 containerd[1549]: time="2026-03-02T14:34:24.339517199Z" level=info msg="RemoveContainer for \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\""
Mar 2 14:34:24.376299 containerd[1549]: time="2026-03-02T14:34:24.375054484Z" level=info msg="RemoveContainer for \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" returns successfully"
Mar 2 14:34:24.376299 containerd[1549]: time="2026-03-02T14:34:24.375877616Z" level=error msg="ContainerStatus for \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\": not found"
Mar 2 14:34:24.376485 kubelet[2802]: I0302 14:34:24.375438 2802 scope.go:117] "RemoveContainer" containerID="a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5"
Mar 2 14:34:24.376485 kubelet[2802]: E0302 14:34:24.376254 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\": not found" containerID="a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5"
Mar 2 14:34:24.376485 kubelet[2802]: I0302 14:34:24.376297 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5"} err="failed to get container status \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0368bd48664792a19b386dab6742bed76a2f064fc9575a8b54f1d283d49f1c5\": not found"
Mar 2 14:34:24.376485 kubelet[2802]: I0302 14:34:24.376323 2802 scope.go:117] "RemoveContainer" containerID="12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e"
Mar 2 14:34:24.379209 containerd[1549]: time="2026-03-02T14:34:24.377336745Z" level=error msg="ContainerStatus for \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\": not found"
Mar 2 14:34:24.383348 kubelet[2802]: E0302 14:34:24.379641 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\": not found" containerID="12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e"
Mar 2 14:34:24.383348 kubelet[2802]: I0302 14:34:24.380216 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e"} err="failed to get container status \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\": rpc error: code = NotFound desc = an error occurred when try to find container \"12116abf7bd61d270d016db8e3cd307552e0f7967df88cbd5b4b7bec6bcc916e\": not found"
Mar 2 14:34:24.383348 kubelet[2802]: I0302 14:34:24.381490 2802 scope.go:117] "RemoveContainer" containerID="d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec"
Mar 2 14:34:24.383348 kubelet[2802]: E0302 14:34:24.382270 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\": not found" containerID="d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec"
Mar 2 14:34:24.383348 kubelet[2802]: I0302 14:34:24.382566 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec"} err="failed to get container status \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\": rpc error: code = NotFound desc = an error occurred when try to find container \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\": not found"
Mar 2 14:34:24.383348 kubelet[2802]: I0302 14:34:24.382586 2802 scope.go:117] "RemoveContainer" containerID="19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae"
Mar 2 14:34:24.383593 containerd[1549]: time="2026-03-02T14:34:24.382136378Z" level=error msg="ContainerStatus for \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d974da7d2a7dffb563128a090c76b4f2241e27def20e7872ddd131bd98238aec\": not found"
Mar 2 14:34:24.383593 containerd[1549]: time="2026-03-02T14:34:24.382867375Z" level=error msg="ContainerStatus for \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\": not found"
Mar 2 14:34:24.383772 kubelet[2802]: E0302 14:34:24.382987 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\": not found" containerID="19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae"
Mar 2 14:34:24.383772 kubelet[2802]: I0302 14:34:24.383014 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae"} err="failed to get container status \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\": rpc error: code = NotFound desc = an error occurred when try to find container \"19e47d07b6d63dd890f5b052a10d9a51b8751d032f690d86433d15405f98daae\": not found"
Mar 2 14:34:24.383772 kubelet[2802]: I0302 14:34:24.383035 2802 scope.go:117] "RemoveContainer" containerID="83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1"
Mar 2 14:34:24.384377 containerd[1549]: time="2026-03-02T14:34:24.384273756Z" level=error msg="ContainerStatus for \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\": not found"
Mar 2 14:34:24.384457 kubelet[2802]: E0302 14:34:24.384439 2802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\": not found" containerID="83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1" Mar 2 14:34:24.384513 kubelet[2802]: I0302 14:34:24.384464 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1"} err="failed to get container status \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\": rpc error: code = NotFound desc = an error occurred when try to find container \"83d72e262167998ddb9c02584a8bd34018255b55f56f821a95d159446e87bda1\": not found" Mar 2 14:34:24.495837 kubelet[2802]: I0302 14:34:24.494523 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b6e507-71c8-4023-90dc-a9e9a453dfd8" path="/var/lib/kubelet/pods/a8b6e507-71c8-4023-90dc-a9e9a453dfd8/volumes" Mar 2 14:34:24.495837 kubelet[2802]: I0302 14:34:24.495625 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e30928-0e34-488f-826e-eefe5d9bc161" path="/var/lib/kubelet/pods/c7e30928-0e34-488f-826e-eefe5d9bc161/volumes" Mar 2 14:34:24.701265 sshd[4834]: Connection closed by 10.0.0.1 port 53884 Mar 2 14:34:24.701871 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Mar 2 14:34:24.709588 systemd[1]: sshd@33-10.0.0.8:22-10.0.0.1:53884.service: Deactivated successfully. Mar 2 14:34:24.714500 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 14:34:24.716201 systemd[1]: session-34.scope: Consumed 1.056s CPU time, 24.2M memory peak. Mar 2 14:34:24.721478 systemd-logind[1534]: Session 34 logged out. Waiting for processes to exit. 
Mar 2 14:34:24.739023 systemd[1]: Started sshd@34-10.0.0.8:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900).
Mar 2 14:34:24.742052 systemd-logind[1534]: Removed session 34.
Mar 2 14:34:24.843287 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:24.846169 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:24.856291 systemd-logind[1534]: New session 35 of user core.
Mar 2 14:34:24.871030 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 2 14:34:25.082119 kubelet[2802]: E0302 14:34:25.081923 2802 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 14:34:25.881529 sshd[4989]: Connection closed by 10.0.0.1 port 53900
Mar 2 14:34:25.881620 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:25.897178 systemd[1]: sshd@34-10.0.0.8:22-10.0.0.1:53900.service: Deactivated successfully.
Mar 2 14:34:25.901592 systemd[1]: session-35.scope: Deactivated successfully.
Mar 2 14:34:25.904606 systemd-logind[1534]: Session 35 logged out. Waiting for processes to exit.
Mar 2 14:34:25.911360 systemd[1]: Started sshd@35-10.0.0.8:22-10.0.0.1:53916.service - OpenSSH per-connection server daemon (10.0.0.1:53916).
Mar 2 14:34:25.918020 systemd-logind[1534]: Removed session 35.
Mar 2 14:34:26.017604 systemd[1]: Created slice kubepods-burstable-pod072e9fbd_877e_4e60_83b9_012a97c78190.slice - libcontainer container kubepods-burstable-pod072e9fbd_877e_4e60_83b9_012a97c78190.slice.
Mar 2 14:34:26.018848 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 53916 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:26.020641 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:26.039604 systemd-logind[1534]: New session 36 of user core.
Mar 2 14:34:26.052573 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 2 14:34:26.089296 sshd[5004]: Connection closed by 10.0.0.1 port 53916
Mar 2 14:34:26.091955 sshd-session[5001]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:26.103844 systemd[1]: sshd@35-10.0.0.8:22-10.0.0.1:53916.service: Deactivated successfully.
Mar 2 14:34:26.106983 systemd[1]: session-36.scope: Deactivated successfully.
Mar 2 14:34:26.113024 systemd-logind[1534]: Session 36 logged out. Waiting for processes to exit.
Mar 2 14:34:26.115304 systemd[1]: Started sshd@36-10.0.0.8:22-10.0.0.1:53918.service - OpenSSH per-connection server daemon (10.0.0.1:53918).
Mar 2 14:34:26.119959 systemd-logind[1534]: Removed session 36.
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163045 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-xtables-lock\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163226 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-host-proc-sys-net\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163255 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-host-proc-sys-kernel\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163282 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-cni-path\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163305 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-cilium-run\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.163649 kubelet[2802]: I0302 14:34:26.163324 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-bpf-maps\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163345 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/072e9fbd-877e-4e60-83b9-012a97c78190-cilium-ipsec-secrets\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163368 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/072e9fbd-877e-4e60-83b9-012a97c78190-hubble-tls\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163388 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-etc-cni-netd\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163408 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-hostproc\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163430 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-cilium-cgroup\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.164937 kubelet[2802]: I0302 14:34:26.163452 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/072e9fbd-877e-4e60-83b9-012a97c78190-clustermesh-secrets\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.165355 kubelet[2802]: I0302 14:34:26.163474 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/072e9fbd-877e-4e60-83b9-012a97c78190-cilium-config-path\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.165355 kubelet[2802]: I0302 14:34:26.163498 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/072e9fbd-877e-4e60-83b9-012a97c78190-lib-modules\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.165355 kubelet[2802]: I0302 14:34:26.163521 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lqh\" (UniqueName: \"kubernetes.io/projected/072e9fbd-877e-4e60-83b9-012a97c78190-kube-api-access-v7lqh\") pod \"cilium-ld2sm\" (UID: \"072e9fbd-877e-4e60-83b9-012a97c78190\") " pod="kube-system/cilium-ld2sm"
Mar 2 14:34:26.211617 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 53918 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:34:26.216319 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:34:26.234045 systemd-logind[1534]: New session 37 of user core.
Mar 2 14:34:26.248332 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 2 14:34:26.332537 kubelet[2802]: E0302 14:34:26.332272 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:26.334863 containerd[1549]: time="2026-03-02T14:34:26.334432642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ld2sm,Uid:072e9fbd-877e-4e60-83b9-012a97c78190,Namespace:kube-system,Attempt:0,}"
Mar 2 14:34:26.444398 containerd[1549]: time="2026-03-02T14:34:26.444208341Z" level=info msg="connecting to shim 9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" namespace=k8s.io protocol=ttrpc version=3
Mar 2 14:34:26.620405 systemd[1]: Started cri-containerd-9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016.scope - libcontainer container 9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016.
Mar 2 14:34:26.712605 containerd[1549]: time="2026-03-02T14:34:26.711480061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ld2sm,Uid:072e9fbd-877e-4e60-83b9-012a97c78190,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\""
Mar 2 14:34:26.713735 kubelet[2802]: E0302 14:34:26.713417 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:26.730422 containerd[1549]: time="2026-03-02T14:34:26.728040146Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 14:34:26.761416 containerd[1549]: time="2026-03-02T14:34:26.760950702Z" level=info msg="Container 3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:34:26.780285 containerd[1549]: time="2026-03-02T14:34:26.780123816Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092\""
Mar 2 14:34:26.783563 containerd[1549]: time="2026-03-02T14:34:26.783366062Z" level=info msg="StartContainer for \"3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092\""
Mar 2 14:34:26.789630 containerd[1549]: time="2026-03-02T14:34:26.785011388Z" level=info msg="connecting to shim 3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" protocol=ttrpc version=3
Mar 2 14:34:26.850389 systemd[1]: Started cri-containerd-3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092.scope - libcontainer container 3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092.
Mar 2 14:34:26.964370 containerd[1549]: time="2026-03-02T14:34:26.963969879Z" level=info msg="StartContainer for \"3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092\" returns successfully"
Mar 2 14:34:26.995522 systemd[1]: cri-containerd-3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092.scope: Deactivated successfully.
Mar 2 14:34:27.007482 containerd[1549]: time="2026-03-02T14:34:27.007391263Z" level=info msg="received container exit event container_id:\"3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092\" id:\"3b73d57adbb44afb2205175960359ad27a8169624c0dd9b4da7c4fc7f5b3b092\" pid:5085 exited_at:{seconds:1772462067 nanos:6587493}"
Mar 2 14:34:27.223276 kubelet[2802]: E0302 14:34:27.222453 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:27.250297 containerd[1549]: time="2026-03-02T14:34:27.250024022Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 14:34:27.281845 containerd[1549]: time="2026-03-02T14:34:27.281657874Z" level=info msg="Container ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:34:27.290291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291662003.mount: Deactivated successfully.
Mar 2 14:34:27.318285 containerd[1549]: time="2026-03-02T14:34:27.317900579Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a\""
Mar 2 14:34:27.320815 containerd[1549]: time="2026-03-02T14:34:27.319808721Z" level=info msg="StartContainer for \"ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a\""
Mar 2 14:34:27.321465 containerd[1549]: time="2026-03-02T14:34:27.321434560Z" level=info msg="connecting to shim ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" protocol=ttrpc version=3
Mar 2 14:34:27.392953 systemd[1]: Started cri-containerd-ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a.scope - libcontainer container ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a.
Mar 2 14:34:27.567611 containerd[1549]: time="2026-03-02T14:34:27.565883185Z" level=info msg="StartContainer for \"ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a\" returns successfully"
Mar 2 14:34:27.589430 systemd[1]: cri-containerd-ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a.scope: Deactivated successfully.
Mar 2 14:34:27.594191 containerd[1549]: time="2026-03-02T14:34:27.592948861Z" level=info msg="received container exit event container_id:\"ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a\" id:\"ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a\" pid:5130 exited_at:{seconds:1772462067 nanos:589450232}"
Mar 2 14:34:27.678922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea9dc4499bbbeb0235b3c86bf455d0c976144921079ba77a5c4df5c55aaee81a-rootfs.mount: Deactivated successfully.
Mar 2 14:34:28.236457 kubelet[2802]: E0302 14:34:28.236359 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:28.253772 containerd[1549]: time="2026-03-02T14:34:28.253324585Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 14:34:28.295169 containerd[1549]: time="2026-03-02T14:34:28.294643169Z" level=info msg="Container 3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:34:28.320252 containerd[1549]: time="2026-03-02T14:34:28.319888710Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785\""
Mar 2 14:34:28.321402 containerd[1549]: time="2026-03-02T14:34:28.320934344Z" level=info msg="StartContainer for \"3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785\""
Mar 2 14:34:28.324218 containerd[1549]: time="2026-03-02T14:34:28.324127074Z" level=info msg="connecting to shim 3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" protocol=ttrpc version=3
Mar 2 14:34:28.387271 systemd[1]: Started cri-containerd-3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785.scope - libcontainer container 3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785.
Mar 2 14:34:28.597184 containerd[1549]: time="2026-03-02T14:34:28.596329634Z" level=info msg="StartContainer for \"3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785\" returns successfully"
Mar 2 14:34:28.603622 systemd[1]: cri-containerd-3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785.scope: Deactivated successfully.
Mar 2 14:34:28.624142 containerd[1549]: time="2026-03-02T14:34:28.623852165Z" level=info msg="received container exit event container_id:\"3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785\" id:\"3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785\" pid:5175 exited_at:{seconds:1772462068 nanos:622961250}"
Mar 2 14:34:28.705463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c77e361598279d92d461bc9b5a05c1edf764c5ea8ca54ece3c291dc1d018785-rootfs.mount: Deactivated successfully.
Mar 2 14:34:28.863336 kubelet[2802]: I0302 14:34:28.862649 2802 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T14:34:28Z","lastTransitionTime":"2026-03-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 14:34:29.264239 kubelet[2802]: E0302 14:34:29.263934 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:29.301834 containerd[1549]: time="2026-03-02T14:34:29.298512963Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 14:34:29.349512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599445592.mount: Deactivated successfully.
Mar 2 14:34:29.363847 containerd[1549]: time="2026-03-02T14:34:29.362220149Z" level=info msg="Container 81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:34:29.408161 containerd[1549]: time="2026-03-02T14:34:29.407559667Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956\""
Mar 2 14:34:29.428483 containerd[1549]: time="2026-03-02T14:34:29.428433735Z" level=info msg="StartContainer for \"81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956\""
Mar 2 14:34:29.462315 containerd[1549]: time="2026-03-02T14:34:29.462194180Z" level=info msg="connecting to shim 81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" protocol=ttrpc version=3
Mar 2 14:34:29.548404 systemd[1]: Started cri-containerd-81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956.scope - libcontainer container 81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956.
Mar 2 14:34:29.649636 systemd[1]: cri-containerd-81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956.scope: Deactivated successfully.
Mar 2 14:34:29.653850 containerd[1549]: time="2026-03-02T14:34:29.653508589Z" level=info msg="received container exit event container_id:\"81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956\" id:\"81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956\" pid:5212 exited_at:{seconds:1772462069 nanos:651892142}"
Mar 2 14:34:29.679247 containerd[1549]: time="2026-03-02T14:34:29.678428005Z" level=info msg="StartContainer for \"81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956\" returns successfully"
Mar 2 14:34:29.729974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e8241689a56b001caca64cdaa5251d1ec1284c017194ce55492a8e31d56956-rootfs.mount: Deactivated successfully.
Mar 2 14:34:30.084304 kubelet[2802]: E0302 14:34:30.084015 2802 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 14:34:30.276946 kubelet[2802]: E0302 14:34:30.275308 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:30.288569 containerd[1549]: time="2026-03-02T14:34:30.288227848Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 14:34:30.325247 containerd[1549]: time="2026-03-02T14:34:30.324560382Z" level=info msg="Container 81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:34:30.358626 containerd[1549]: time="2026-03-02T14:34:30.356904016Z" level=info msg="CreateContainer within sandbox \"9ffd200741da6aacd64b410829af10276affbf501457b796443b4d1029993016\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227\""
Mar 2 14:34:30.361773 containerd[1549]: time="2026-03-02T14:34:30.361597137Z" level=info msg="StartContainer for \"81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227\""
Mar 2 14:34:30.367519 containerd[1549]: time="2026-03-02T14:34:30.365477946Z" level=info msg="connecting to shim 81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227" address="unix:///run/containerd/s/f48bb4f48197096a3ee918ef54f99cf24f9361f275206f38de0598ca400de585" protocol=ttrpc version=3
Mar 2 14:34:30.405201 systemd[1]: Started cri-containerd-81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227.scope - libcontainer container 81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227.
Mar 2 14:34:30.531142 containerd[1549]: time="2026-03-02T14:34:30.530793240Z" level=info msg="StartContainer for \"81388fec85179b9dbe1edda17e2e8f9a666448358c0207492931bf62c24b9227\" returns successfully"
Mar 2 14:34:31.296011 kubelet[2802]: E0302 14:34:31.295911 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:31.355330 kubelet[2802]: I0302 14:34:31.355263 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ld2sm" podStartSLOduration=6.355246296 podStartE2EDuration="6.355246296s" podCreationTimestamp="2026-03-02 14:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:34:31.354209851 +0000 UTC m=+443.957602857" watchObservedRunningTime="2026-03-02 14:34:31.355246296 +0000 UTC m=+443.958639262"
Mar 2 14:34:31.527409 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 2 14:34:32.331271 kubelet[2802]: E0302 14:34:32.329654 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:37.065268 systemd-networkd[1452]: lxc_health: Link UP
Mar 2 14:34:37.086452 systemd-networkd[1452]: lxc_health: Gained carrier
Mar 2 14:34:38.225374 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Mar 2 14:34:38.334936 kubelet[2802]: E0302 14:34:38.333640 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:39.332641 kubelet[2802]: E0302 14:34:39.330641 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:40.333618 kubelet[2802]: E0302 14:34:40.333126 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:41.491477 kubelet[2802]: E0302 14:34:41.491371 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:34:42.456373 sshd[5014]: Connection closed by 10.0.0.1 port 53918
Mar 2 14:34:42.457244 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
Mar 2 14:34:42.463575 systemd[1]: sshd@36-10.0.0.8:22-10.0.0.1:53918.service: Deactivated successfully.
Mar 2 14:34:42.470297 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 14:34:42.483114 systemd-logind[1534]: Session 37 logged out. Waiting for processes to exit.
Mar 2 14:34:42.487291 systemd-logind[1534]: Removed session 37.