May 16 16:40:23.823859 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 14:52:24 -00 2025
May 16 16:40:23.823891 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:40:23.823901 kernel: BIOS-provided physical RAM map:
May 16 16:40:23.823908 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:40:23.823914 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:40:23.823920 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:40:23.823927 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:40:23.823934 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:40:23.823942 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:40:23.823949 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:40:23.823955 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 16:40:23.823961 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:40:23.823967 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:40:23.823974 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:40:23.823984 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:40:23.823991 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:40:23.823998 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:40:23.824005 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:40:23.824011 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:40:23.824018 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:40:23.824025 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:40:23.824031 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:40:23.824038 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:40:23.824044 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:40:23.824051 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:40:23.824060 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:40:23.824066 kernel: NX (Execute Disable) protection: active
May 16 16:40:23.824073 kernel: APIC: Static calls initialized
May 16 16:40:23.824080 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 16 16:40:23.824086 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 16 16:40:23.824093 kernel: extended physical RAM map:
May 16 16:40:23.824100 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:40:23.824107 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:40:23.824113 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:40:23.824120 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:40:23.824140 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:40:23.824150 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:40:23.824156 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:40:23.824163 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 16 16:40:23.824170 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 16 16:40:23.824189 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 16 16:40:23.824196 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 16 16:40:23.824205 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 16 16:40:23.824212 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:40:23.824219 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:40:23.824226 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:40:23.824233 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:40:23.824240 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:40:23.824247 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:40:23.824254 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:40:23.824261 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:40:23.824270 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:40:23.824277 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:40:23.824284 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:40:23.824291 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:40:23.824298 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:40:23.824305 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:40:23.824312 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:40:23.824319 kernel: efi: EFI v2.7 by EDK II
May 16 16:40:23.824326 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 16 16:40:23.824333 kernel: random: crng init done
May 16 16:40:23.824340 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 16:40:23.824347 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 16:40:23.824357 kernel: secureboot: Secure boot disabled
May 16 16:40:23.824364 kernel: SMBIOS 2.8 present.
May 16 16:40:23.824371 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 16:40:23.824378 kernel: DMI: Memory slots populated: 1/1
May 16 16:40:23.824384 kernel: Hypervisor detected: KVM
May 16 16:40:23.824391 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 16:40:23.824399 kernel: kvm-clock: using sched offset of 3649157806 cycles
May 16 16:40:23.824406 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 16:40:23.824413 kernel: tsc: Detected 2794.748 MHz processor
May 16 16:40:23.824421 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 16:40:23.824428 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 16:40:23.824437 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 16:40:23.824445 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 16:40:23.824452 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 16:40:23.824459 kernel: Using GB pages for direct mapping
May 16 16:40:23.824466 kernel: ACPI: Early table checksum verification disabled
May 16 16:40:23.824473 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 16:40:23.824481 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 16:40:23.824488 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824495 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824504 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 16:40:23.824511 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824519 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824526 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824542 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:40:23.824549 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 16:40:23.824556 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 16:40:23.824563 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 16:40:23.824573 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 16:40:23.824580 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 16:40:23.824595 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 16:40:23.824610 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 16:40:23.824631 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 16:40:23.824639 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 16:40:23.824646 kernel: No NUMA configuration found
May 16 16:40:23.824653 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 16:40:23.824660 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 16 16:40:23.824667 kernel: Zone ranges:
May 16 16:40:23.824678 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 16:40:23.824685 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 16:40:23.824699 kernel: Normal empty
May 16 16:40:23.824706 kernel: Device empty
May 16 16:40:23.824721 kernel: Movable zone start for each node
May 16 16:40:23.824742 kernel: Early memory node ranges
May 16 16:40:23.824749 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 16:40:23.824757 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 16:40:23.824764 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 16:40:23.824774 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 16:40:23.824781 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 16:40:23.824788 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 16:40:23.824795 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 16 16:40:23.824802 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 16 16:40:23.824810 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 16:40:23.824817 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:40:23.824824 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 16:40:23.824841 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 16:40:23.824848 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:40:23.824855 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 16:40:23.824863 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 16:40:23.824872 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 16:40:23.824880 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 16:40:23.824887 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 16:40:23.824895 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 16:40:23.824902 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 16:40:23.824912 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 16:40:23.824919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 16:40:23.824927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 16:40:23.824934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 16:40:23.824941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 16:40:23.824949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 16:40:23.824956 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 16:40:23.824964 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 16:40:23.824971 kernel: TSC deadline timer available
May 16 16:40:23.824978 kernel: CPU topo: Max. logical packages: 1
May 16 16:40:23.824988 kernel: CPU topo: Max. logical dies: 1
May 16 16:40:23.824996 kernel: CPU topo: Max. dies per package: 1
May 16 16:40:23.825003 kernel: CPU topo: Max. threads per core: 1
May 16 16:40:23.825010 kernel: CPU topo: Num. cores per package: 4
May 16 16:40:23.825018 kernel: CPU topo: Num. threads per package: 4
May 16 16:40:23.825025 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 16 16:40:23.825032 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 16:40:23.825040 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 16:40:23.825047 kernel: kvm-guest: setup PV sched yield
May 16 16:40:23.825057 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 16:40:23.825064 kernel: Booting paravirtualized kernel on KVM
May 16 16:40:23.825072 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 16:40:23.825079 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 16:40:23.825087 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 16 16:40:23.825094 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 16 16:40:23.825102 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 16:40:23.825109 kernel: kvm-guest: PV spinlocks enabled
May 16 16:40:23.825116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 16:40:23.825139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:40:23.825147 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 16:40:23.825154 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 16:40:23.825162 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 16:40:23.825169 kernel: Fallback order for Node 0: 0
May 16 16:40:23.825183 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 16 16:40:23.825191 kernel: Policy zone: DMA32
May 16 16:40:23.825198 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 16:40:23.825208 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 16:40:23.825216 kernel: ftrace: allocating 40065 entries in 157 pages
May 16 16:40:23.825223 kernel: ftrace: allocated 157 pages with 5 groups
May 16 16:40:23.825231 kernel: Dynamic Preempt: voluntary
May 16 16:40:23.825238 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 16:40:23.825246 kernel: rcu: RCU event tracing is enabled.
May 16 16:40:23.825254 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 16:40:23.825261 kernel: Trampoline variant of Tasks RCU enabled.
May 16 16:40:23.825269 kernel: Rude variant of Tasks RCU enabled.
May 16 16:40:23.825279 kernel: Tracing variant of Tasks RCU enabled.
May 16 16:40:23.825286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 16:40:23.825294 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 16:40:23.825302 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:40:23.825309 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:40:23.825317 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:40:23.825324 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 16:40:23.825332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 16:40:23.825339 kernel: Console: colour dummy device 80x25
May 16 16:40:23.825346 kernel: printk: legacy console [ttyS0] enabled
May 16 16:40:23.825356 kernel: ACPI: Core revision 20240827
May 16 16:40:23.825364 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 16:40:23.825371 kernel: APIC: Switch to symmetric I/O mode setup
May 16 16:40:23.825379 kernel: x2apic enabled
May 16 16:40:23.825386 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 16:40:23.825394 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 16:40:23.825401 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 16:40:23.825409 kernel: kvm-guest: setup PV IPIs
May 16 16:40:23.825416 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 16:40:23.825426 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:40:23.825434 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 16:40:23.825441 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 16:40:23.825448 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 16:40:23.825456 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 16:40:23.825463 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 16:40:23.825471 kernel: Spectre V2 : Mitigation: Retpolines
May 16 16:40:23.825478 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 16 16:40:23.825488 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 16 16:40:23.825495 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 16:40:23.825503 kernel: RETBleed: Mitigation: untrained return thunk
May 16 16:40:23.825510 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 16:40:23.825518 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 16:40:23.825525 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 16:40:23.825533 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 16:40:23.825541 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 16:40:23.825548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 16:40:23.825558 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 16:40:23.825566 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 16:40:23.825573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 16:40:23.825581 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 16:40:23.825588 kernel: Freeing SMP alternatives memory: 32K
May 16 16:40:23.825595 kernel: pid_max: default: 32768 minimum: 301
May 16 16:40:23.825603 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 16:40:23.825610 kernel: landlock: Up and running.
May 16 16:40:23.825617 kernel: SELinux: Initializing.
May 16 16:40:23.825627 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:40:23.825635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:40:23.825642 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 16:40:23.825650 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 16:40:23.825657 kernel: ... version: 0
May 16 16:40:23.825664 kernel: ... bit width: 48
May 16 16:40:23.825672 kernel: ... generic registers: 6
May 16 16:40:23.825679 kernel: ... value mask: 0000ffffffffffff
May 16 16:40:23.825686 kernel: ... max period: 00007fffffffffff
May 16 16:40:23.825696 kernel: ... fixed-purpose events: 0
May 16 16:40:23.825703 kernel: ... event mask: 000000000000003f
May 16 16:40:23.825710 kernel: signal: max sigframe size: 1776
May 16 16:40:23.825718 kernel: rcu: Hierarchical SRCU implementation.
May 16 16:40:23.825725 kernel: rcu: Max phase no-delay instances is 400.
May 16 16:40:23.825733 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 16:40:23.825740 kernel: smp: Bringing up secondary CPUs ...
May 16 16:40:23.825748 kernel: smpboot: x86: Booting SMP configuration:
May 16 16:40:23.825762 kernel: .... node #0, CPUs: #1 #2 #3
May 16 16:40:23.825781 kernel: smp: Brought up 1 node, 4 CPUs
May 16 16:40:23.825792 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 16:40:23.825810 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137196K reserved, 0K cma-reserved)
May 16 16:40:23.825817 kernel: devtmpfs: initialized
May 16 16:40:23.825824 kernel: x86/mm: Memory block size: 128MB
May 16 16:40:23.825832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 16:40:23.825840 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 16:40:23.825847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 16:40:23.825855 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 16:40:23.825864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 16 16:40:23.825872 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 16:40:23.825879 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 16:40:23.825887 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 16:40:23.825894 kernel: pinctrl core: initialized pinctrl subsystem
May 16 16:40:23.825902 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 16:40:23.825909 kernel: audit: initializing netlink subsys (disabled)
May 16 16:40:23.825916 kernel: audit: type=2000 audit(1747413622.196:1): state=initialized audit_enabled=0 res=1
May 16 16:40:23.825926 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 16:40:23.825933 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 16:40:23.825941 kernel: cpuidle: using governor menu
May 16 16:40:23.825948 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 16:40:23.825955 kernel: dca service started, version 1.12.1
May 16 16:40:23.825963 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 16 16:40:23.825970 kernel: PCI: Using configuration type 1 for base access
May 16 16:40:23.825978 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 16:40:23.825985 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 16:40:23.825995 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 16:40:23.826002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 16:40:23.826009 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 16:40:23.826017 kernel: ACPI: Added _OSI(Module Device)
May 16 16:40:23.826024 kernel: ACPI: Added _OSI(Processor Device)
May 16 16:40:23.826031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 16:40:23.826039 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 16:40:23.826046 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 16:40:23.826053 kernel: ACPI: Interpreter enabled
May 16 16:40:23.826071 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 16:40:23.826086 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 16:40:23.826094 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 16:40:23.826101 kernel: PCI: Using E820 reservations for host bridge windows
May 16 16:40:23.826109 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 16:40:23.826116 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 16:40:23.826303 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 16:40:23.826426 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 16:40:23.826544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 16:40:23.826554 kernel: PCI host bridge to bus 0000:00
May 16 16:40:23.826672 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 16:40:23.826776 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 16:40:23.826882 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 16:40:23.826986 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 16:40:23.827088 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 16:40:23.827288 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 16:40:23.827394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 16:40:23.827579 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 16 16:40:23.827713 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 16 16:40:23.827828 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 16 16:40:23.827941 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 16 16:40:23.828058 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 16 16:40:23.828198 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 16:40:23.828323 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 16:40:23.828449 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 16 16:40:23.828599 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 16 16:40:23.828715 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 16:40:23.828838 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 16 16:40:23.828958 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 16 16:40:23.829072 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 16 16:40:23.829230 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 16:40:23.829355 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 16 16:40:23.829469 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 16 16:40:23.829584 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 16 16:40:23.829698 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 16:40:23.829826 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 16 16:40:23.829955 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 16 16:40:23.830068 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 16:40:23.830222 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 16 16:40:23.830339 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 16 16:40:23.830452 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 16 16:40:23.830574 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 16 16:40:23.830692 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 16 16:40:23.830703 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 16:40:23.830710 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 16:40:23.830718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 16:40:23.830725 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 16:40:23.830733 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 16:40:23.830740 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 16:40:23.830748 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 16:40:23.830757 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 16:40:23.830765 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 16:40:23.830772 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 16:40:23.830780 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 16:40:23.830787 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 16:40:23.830794 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 16:40:23.830802 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 16:40:23.830809 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 16:40:23.830816 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 16:40:23.830826 kernel: iommu: Default domain type: Translated
May 16 16:40:23.830834 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 16:40:23.830841 kernel: efivars: Registered efivars operations
May 16 16:40:23.830848 kernel: PCI: Using ACPI for IRQ routing
May 16 16:40:23.830856 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 16:40:23.830863 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 16:40:23.830870 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 16:40:23.830878 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 16 16:40:23.830885 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 16 16:40:23.830894 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 16:40:23.830901 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 16:40:23.830909 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 16 16:40:23.830916 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 16:40:23.831029 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 16:40:23.831159 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 16:40:23.831284 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 16:40:23.831295 kernel: vgaarb: loaded
May 16 16:40:23.831306 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 16:40:23.831313 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 16:40:23.831321 kernel: clocksource: Switched to clocksource kvm-clock
May 16 16:40:23.831328 kernel: VFS: Disk quotas dquot_6.6.0
May 16 16:40:23.831336 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 16:40:23.831343 kernel: pnp: PnP ACPI init
May 16 16:40:23.831485 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 16:40:23.831499 kernel: pnp: PnP ACPI: found 6 devices
May 16 16:40:23.831509 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 16:40:23.831516 kernel: NET: Registered PF_INET protocol family
May 16 16:40:23.831524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 16:40:23.831532 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 16:40:23.831540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 16:40:23.831548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 16:40:23.831556 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 16:40:23.831563 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 16:40:23.831573 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:40:23.831581 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:40:23.831588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 16:40:23.831596 kernel: NET: Registered PF_XDP protocol family
May 16 16:40:23.831713 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 16 16:40:23.831828 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 16 16:40:23.831933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 16:40:23.832037 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 16:40:23.832160 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 16:40:23.832278 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 16:40:23.832382 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 16:40:23.832486 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 16:40:23.832496 kernel: PCI: CLS 0 bytes, default 64
May 16 16:40:23.832504 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:40:23.832512 kernel: Initialise system trusted keyrings
May 16 16:40:23.832522 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 16:40:23.832530 kernel: Key type asymmetric registered
May 16 16:40:23.832538 kernel: Asymmetric key parser 'x509' registered
May 16 16:40:23.832546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 16:40:23.832553 kernel: io scheduler mq-deadline registered
May 16 16:40:23.832561 kernel: io scheduler kyber registered
May 16 16:40:23.832569 kernel: io scheduler bfq registered
May 16 16:40:23.832577 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 16:40:23.832587 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 16:40:23.832595 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 16:40:23.832602 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 16:40:23.832610 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 16:40:23.832618 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 16:40:23.832626 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 16:40:23.832634 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 16:40:23.832642 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 16:40:23.832759 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 16:40:23.832873 kernel: rtc_cmos
00:04: registered as rtc0 May 16 16:40:23.832981 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T16:40:23 UTC (1747413623) May 16 16:40:23.833088 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 16:40:23.833099 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 16:40:23.833107 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 16 16:40:23.833118 kernel: efifb: probing for efifb May 16 16:40:23.833141 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 16 16:40:23.833148 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 16 16:40:23.833159 kernel: efifb: scrolling: redraw May 16 16:40:23.833167 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 16:40:23.833183 kernel: Console: switching to colour frame buffer device 160x50 May 16 16:40:23.833191 kernel: fb0: EFI VGA frame buffer device May 16 16:40:23.833199 kernel: pstore: Using crash dump compression: deflate May 16 16:40:23.833207 kernel: pstore: Registered efi_pstore as persistent store backend May 16 16:40:23.833214 kernel: NET: Registered PF_INET6 protocol family May 16 16:40:23.833222 kernel: Segment Routing with IPv6 May 16 16:40:23.833230 kernel: In-situ OAM (IOAM) with IPv6 May 16 16:40:23.833240 kernel: NET: Registered PF_PACKET protocol family May 16 16:40:23.833248 kernel: Key type dns_resolver registered May 16 16:40:23.833256 kernel: IPI shorthand broadcast: enabled May 16 16:40:23.833264 kernel: sched_clock: Marking stable (2738084624, 157476355)->(2918417597, -22856618) May 16 16:40:23.833271 kernel: registered taskstats version 1 May 16 16:40:23.833279 kernel: Loading compiled-in X.509 certificates May 16 16:40:23.833287 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 310304ddc2cf6c43796c9bf79d11c0543afdf71f' May 16 16:40:23.833295 kernel: Demotion targets for Node 0: null May 16 16:40:23.833302 kernel: Key 
type .fscrypt registered May 16 16:40:23.833312 kernel: Key type fscrypt-provisioning registered May 16 16:40:23.833319 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 16:40:23.833327 kernel: ima: Allocated hash algorithm: sha1 May 16 16:40:23.833335 kernel: ima: No architecture policies found May 16 16:40:23.833342 kernel: clk: Disabling unused clocks May 16 16:40:23.833350 kernel: Warning: unable to open an initial console. May 16 16:40:23.833358 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 16:40:23.833366 kernel: Write protecting the kernel read-only data: 24576k May 16 16:40:23.833375 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 16:40:23.833383 kernel: Run /init as init process May 16 16:40:23.833391 kernel: with arguments: May 16 16:40:23.833399 kernel: /init May 16 16:40:23.833406 kernel: with environment: May 16 16:40:23.833414 kernel: HOME=/ May 16 16:40:23.833421 kernel: TERM=linux May 16 16:40:23.833429 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 16:40:23.833438 systemd[1]: Successfully made /usr/ read-only. May 16 16:40:23.833450 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:40:23.833459 systemd[1]: Detected virtualization kvm. May 16 16:40:23.833468 systemd[1]: Detected architecture x86-64. May 16 16:40:23.833475 systemd[1]: Running in initrd. May 16 16:40:23.833483 systemd[1]: No hostname configured, using default hostname. May 16 16:40:23.833492 systemd[1]: Hostname set to . May 16 16:40:23.833500 systemd[1]: Initializing machine ID from VM UUID. May 16 16:40:23.833509 systemd[1]: Queued start job for default target initrd.target. 
May 16 16:40:23.833520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:40:23.833528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:40:23.833537 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 16:40:23.833545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 16:40:23.833554 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 16:40:23.833563 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 16:40:23.833575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 16:40:23.833583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 16:40:23.833591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:40:23.833600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 16:40:23.833608 systemd[1]: Reached target paths.target - Path Units. May 16 16:40:23.833616 systemd[1]: Reached target slices.target - Slice Units. May 16 16:40:23.833624 systemd[1]: Reached target swap.target - Swaps. May 16 16:40:23.833633 systemd[1]: Reached target timers.target - Timer Units. May 16 16:40:23.833641 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 16:40:23.833651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 16:40:23.833659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 16:40:23.833668 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 16 16:40:23.833676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 16:40:23.833684 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 16:40:23.833693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:40:23.833701 systemd[1]: Reached target sockets.target - Socket Units. May 16 16:40:23.833709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 16:40:23.833720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 16:40:23.833728 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 16:40:23.833737 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 16 16:40:23.833745 systemd[1]: Starting systemd-fsck-usr.service... May 16 16:40:23.833753 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 16:40:23.833762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 16:40:23.833770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:40:23.833778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 16:40:23.833789 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:40:23.833798 systemd[1]: Finished systemd-fsck-usr.service. May 16 16:40:23.833825 systemd-journald[220]: Collecting audit messages is disabled. May 16 16:40:23.833846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 16:40:23.833855 systemd-journald[220]: Journal started May 16 16:40:23.833873 systemd-journald[220]: Runtime Journal (/run/log/journal/b3b61d7399e04f4582865cb5aa2924a8) is 6M, max 48.5M, 42.4M free. 
May 16 16:40:23.826226 systemd-modules-load[222]: Inserted module 'overlay' May 16 16:40:23.835695 systemd[1]: Started systemd-journald.service - Journal Service. May 16 16:40:23.836203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 16:40:23.839308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:40:23.843944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 16:40:23.848143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 16:40:23.852305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 16:40:23.854817 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 16:40:23.856293 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 16 16:40:23.858628 systemd-modules-load[222]: Inserted module 'br_netfilter' May 16 16:40:23.859146 kernel: Bridge firewalling registered May 16 16:40:23.860042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 16:40:23.860998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:40:23.861342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:40:23.872383 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 16:40:23.881071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 16:40:23.883043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 16:40:23.893296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 16:40:23.896610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 16:40:23.927580 systemd-resolved[256]: Positive Trust Anchors: May 16 16:40:23.927595 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 16:40:23.927626 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 16:40:23.930045 systemd-resolved[256]: Defaulting to hostname 'linux'. May 16 16:40:23.931044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 16:40:23.940503 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137 May 16 16:40:23.937391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 16:40:24.034160 kernel: SCSI subsystem initialized May 16 16:40:24.043151 kernel: Loading iSCSI transport class v2.0-870. 
May 16 16:40:24.053156 kernel: iscsi: registered transport (tcp) May 16 16:40:24.074148 kernel: iscsi: registered transport (qla4xxx) May 16 16:40:24.074189 kernel: QLogic iSCSI HBA Driver May 16 16:40:24.094667 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 16:40:24.116285 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:40:24.118421 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 16:40:24.181634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 16:40:24.185537 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 16:40:24.254150 kernel: raid6: avx2x4 gen() 30566 MB/s May 16 16:40:24.271146 kernel: raid6: avx2x2 gen() 31059 MB/s May 16 16:40:24.288238 kernel: raid6: avx2x1 gen() 25776 MB/s May 16 16:40:24.288257 kernel: raid6: using algorithm avx2x2 gen() 31059 MB/s May 16 16:40:24.306242 kernel: raid6: .... xor() 19899 MB/s, rmw enabled May 16 16:40:24.306271 kernel: raid6: using avx2x2 recovery algorithm May 16 16:40:24.326159 kernel: xor: automatically using best checksumming function avx May 16 16:40:24.490155 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 16:40:24.499313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 16:40:24.501958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:40:24.536427 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 16 16:40:24.541794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:40:24.543841 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 16:40:24.566067 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation May 16 16:40:24.596508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 16 16:40:24.597934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 16:40:24.689343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 16:40:24.694259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 16:40:24.720145 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 16:40:24.731719 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 16:40:24.738521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 16:40:24.738538 kernel: GPT:9289727 != 19775487 May 16 16:40:24.738548 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 16:40:24.738558 kernel: GPT:9289727 != 19775487 May 16 16:40:24.738567 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 16:40:24.738583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:40:24.755209 kernel: cryptd: max_cpu_qlen set to 1000 May 16 16:40:24.756176 kernel: libata version 3.00 loaded. 
May 16 16:40:24.765162 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 16:40:24.765195 kernel: ahci 0000:00:1f.2: version 3.0 May 16 16:40:24.794234 kernel: AES CTR mode by8 optimization enabled May 16 16:40:24.794250 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 16:40:24.794261 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 16 16:40:24.794410 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 16 16:40:24.794563 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 16:40:24.794697 kernel: scsi host0: ahci May 16 16:40:24.794840 kernel: scsi host1: ahci May 16 16:40:24.794977 kernel: scsi host2: ahci May 16 16:40:24.795256 kernel: scsi host3: ahci May 16 16:40:24.795396 kernel: scsi host4: ahci May 16 16:40:24.795531 kernel: scsi host5: ahci May 16 16:40:24.795665 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 16 16:40:24.795676 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 16 16:40:24.795686 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 16 16:40:24.795696 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 16 16:40:24.795705 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 16 16:40:24.795715 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 16 16:40:24.786074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 16:40:24.786279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:40:24.788065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:40:24.793279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 16:40:24.815195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 16:40:24.833173 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 16:40:24.848849 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 16:40:24.850112 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 16:40:24.858167 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 16:40:24.859038 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 16:40:24.860471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 16:40:24.860519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:40:24.865653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:40:24.877602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:40:24.878859 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 16:40:24.887718 disk-uuid[630]: Primary Header is updated. May 16 16:40:24.887718 disk-uuid[630]: Secondary Entries is updated. May 16 16:40:24.887718 disk-uuid[630]: Secondary Header is updated. May 16 16:40:24.892162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:40:24.896161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:40:24.897251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 16:40:25.102046 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 16:40:25.102116 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 16:40:25.102154 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 16:40:25.102167 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 16:40:25.102180 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 16:40:25.103153 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 16:40:25.104161 kernel: ata3.00: applying bridge limits May 16 16:40:25.104175 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 16:40:25.105156 kernel: ata3.00: configured for UDMA/100 May 16 16:40:25.106152 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 16:40:25.151712 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 16:40:25.171699 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 16:40:25.171713 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 16:40:25.527982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 16:40:25.531053 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 16:40:25.533757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 16:40:25.536037 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 16:40:25.538959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 16:40:25.565801 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 16:40:25.898166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:40:25.898449 disk-uuid[634]: The operation has completed successfully. May 16 16:40:25.935737 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 16:40:25.935851 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 16 16:40:25.965063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 16:40:25.990318 sh[673]: Success May 16 16:40:26.007556 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 16:40:26.007602 kernel: device-mapper: uevent: version 1.0.3 May 16 16:40:26.008659 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 16:40:26.018182 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 16 16:40:26.047774 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 16:40:26.050757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 16:40:26.064424 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 16:40:26.071150 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 16:40:26.071176 kernel: BTRFS: device fsid 85b2a34c-237f-4a0a-87d0-0a783de0f256 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (685) May 16 16:40:26.073154 kernel: BTRFS info (device dm-0): first mount of filesystem 85b2a34c-237f-4a0a-87d0-0a783de0f256 May 16 16:40:26.074775 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 16:40:26.074792 kernel: BTRFS info (device dm-0): using free-space-tree May 16 16:40:26.079783 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 16:40:26.080307 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 16:40:26.081581 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 16:40:26.082576 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 16:40:26.085050 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 16 16:40:26.110849 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (719) May 16 16:40:26.110898 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:40:26.110912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 16:40:26.111926 kernel: BTRFS info (device vda6): using free-space-tree May 16 16:40:26.119176 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:40:26.120065 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 16:40:26.121200 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 16:40:26.195324 ignition[761]: Ignition 2.21.0 May 16 16:40:26.195339 ignition[761]: Stage: fetch-offline May 16 16:40:26.195373 ignition[761]: no configs at "/usr/lib/ignition/base.d" May 16 16:40:26.195382 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:40:26.195465 ignition[761]: parsed url from cmdline: "" May 16 16:40:26.195468 ignition[761]: no config URL provided May 16 16:40:26.195473 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" May 16 16:40:26.195481 ignition[761]: no config at "/usr/lib/ignition/user.ign" May 16 16:40:26.195504 ignition[761]: op(1): [started] loading QEMU firmware config module May 16 16:40:26.195509 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 16:40:26.203354 ignition[761]: op(1): [finished] loading QEMU firmware config module May 16 16:40:26.220526 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 16:40:26.222427 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 16 16:40:26.248676 ignition[761]: parsing config with SHA512: b3492f22580275260278958ab084dff3b75335d3a869ed6c136010826a55d3fbe983ad2dd5be8cb0765ebe7da3e312f37d8b0a93ddf9c5fef86b577e16669f26 May 16 16:40:26.255270 unknown[761]: fetched base config from "system" May 16 16:40:26.255403 unknown[761]: fetched user config from "qemu" May 16 16:40:26.255810 ignition[761]: fetch-offline: fetch-offline passed May 16 16:40:26.255862 ignition[761]: Ignition finished successfully May 16 16:40:26.259085 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 16:40:26.266323 systemd-networkd[863]: lo: Link UP May 16 16:40:26.266333 systemd-networkd[863]: lo: Gained carrier May 16 16:40:26.267770 systemd-networkd[863]: Enumeration completed May 16 16:40:26.267842 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 16:40:26.268095 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:40:26.268099 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 16:40:26.269524 systemd-networkd[863]: eth0: Link UP May 16 16:40:26.269527 systemd-networkd[863]: eth0: Gained carrier May 16 16:40:26.269535 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:40:26.269610 systemd[1]: Reached target network.target - Network. May 16 16:40:26.271277 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 16:40:26.274509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 16 16:40:26.290171 systemd-networkd[863]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 16:40:26.313526 ignition[867]: Ignition 2.21.0 May 16 16:40:26.313539 ignition[867]: Stage: kargs May 16 16:40:26.313676 ignition[867]: no configs at "/usr/lib/ignition/base.d" May 16 16:40:26.313687 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:40:26.317364 ignition[867]: kargs: kargs passed May 16 16:40:26.317434 ignition[867]: Ignition finished successfully May 16 16:40:26.322710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 16:40:26.324830 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 16:40:26.350845 ignition[876]: Ignition 2.21.0 May 16 16:40:26.350858 ignition[876]: Stage: disks May 16 16:40:26.351159 ignition[876]: no configs at "/usr/lib/ignition/base.d" May 16 16:40:26.351174 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:40:26.352755 ignition[876]: disks: disks passed May 16 16:40:26.352829 ignition[876]: Ignition finished successfully May 16 16:40:26.358161 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 16:40:26.358437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 16:40:26.358862 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 16:40:26.359445 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 16:40:26.359815 systemd[1]: Reached target sysinit.target - System Initialization. May 16 16:40:26.360002 systemd[1]: Reached target basic.target - Basic System. May 16 16:40:26.361301 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 16:40:26.387366 systemd-fsck[886]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 16:40:26.395413 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
May 16 16:40:26.398271 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 16:40:26.499148 kernel: EXT4-fs (vda9): mounted filesystem 07293137-138a-42a3-a962-d767034e11a7 r/w with ordered data mode. Quota mode: none. May 16 16:40:26.499637 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 16:40:26.500290 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 16:40:26.503708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 16:40:26.504547 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 16:40:26.506426 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 16:40:26.506463 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 16:40:26.506486 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 16:40:26.521367 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 16:40:26.523625 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 16:40:26.526769 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (894) May 16 16:40:26.528902 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:40:26.528948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 16:40:26.528959 kernel: BTRFS info (device vda6): using free-space-tree May 16 16:40:26.532428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 16:40:26.567212 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory May 16 16:40:26.571434 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory May 16 16:40:26.575327 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory May 16 16:40:26.579389 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory May 16 16:40:26.662871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 16:40:26.663848 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 16:40:26.666577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 16:40:26.683159 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:40:26.698309 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 16:40:26.712739 ignition[1009]: INFO : Ignition 2.21.0 May 16 16:40:26.712739 ignition[1009]: INFO : Stage: mount May 16 16:40:26.714436 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 16:40:26.714436 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:40:26.717712 ignition[1009]: INFO : mount: mount passed May 16 16:40:26.717712 ignition[1009]: INFO : Ignition finished successfully May 16 16:40:26.720308 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 16:40:26.722331 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 16:40:27.071670 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 16:40:27.073305 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 16:40:27.095816 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1021)
May 16 16:40:27.095856 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:40:27.095871 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:40:27.096701 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:40:27.100526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:40:27.134149 ignition[1038]: INFO : Ignition 2.21.0
May 16 16:40:27.134149 ignition[1038]: INFO : Stage: files
May 16 16:40:27.136153 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:40:27.136153 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:40:27.138627 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping
May 16 16:40:27.138627 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 16:40:27.138627 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 16:40:27.143147 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 16:40:27.143147 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 16:40:27.143147 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 16:40:27.143147 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 16 16:40:27.143147 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 16 16:40:27.140895 unknown[1038]: wrote ssh authorized keys file for user: core
May 16 16:40:27.232306 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 16:40:27.484315 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 16 16:40:27.484315 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:40:27.488194 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 16:40:27.821326 systemd-networkd[863]: eth0: Gained IPv6LL
May 16 16:40:27.825633 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 16:40:27.900148 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:40:27.902208 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:40:27.946711 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:40:27.948760 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:40:27.950784 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 16:40:28.000919 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 16:40:28.000919 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 16:40:28.005801 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 16 16:40:28.628942 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 16:40:29.012256 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 16:40:29.012256 ignition[1038]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 16:40:29.016155 ignition[1038]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:40:29.018508 ignition[1038]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:40:29.018508 ignition[1038]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 16:40:29.018508 ignition[1038]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 16:40:29.023446 ignition[1038]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:40:29.023446 ignition[1038]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:40:29.023446 ignition[1038]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 16:40:29.023446 ignition[1038]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 16:40:29.038679 ignition[1038]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:40:29.043156 ignition[1038]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:40:29.044873 ignition[1038]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 16:40:29.044873 ignition[1038]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 16:40:29.044873 ignition[1038]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 16:40:29.044873 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:40:29.044873 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:40:29.044873 ignition[1038]: INFO : files: files passed
May 16 16:40:29.044873 ignition[1038]: INFO : Ignition finished successfully
May 16 16:40:29.049267 systemd[1]: Finished ignition-files.service - Ignition (files).
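The file, link, and unit-preset operations recorded by the files stage above are driven by the machine's Ignition config, which is not part of this log. As a purely hypothetical sketch (all sources and keys are placeholders, and the real config may differ), a Butane config producing similar operations might look like:

```yaml
# Hypothetical Butane sketch -- NOT the actual config used for this boot.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true        # log: "setting preset to enabled"
    - name: coreos-metadata.service
      enabled: false       # log: "setting preset to disabled"
```

Each `storage.files` entry with a remote `source` corresponds to one `op(N): GET ... attempt #1` / `GET result: OK` pair in the log.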
May 16 16:40:29.051667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 16:40:29.054225 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 16:40:29.074333 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 16:40:29.074463 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 16:40:29.077539 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 16:40:29.081633 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:40:29.081633 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:40:29.084822 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:40:29.088004 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:40:29.088293 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 16:40:29.092624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 16:40:29.141529 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 16:40:29.141664 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 16:40:29.142868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 16:40:29.146047 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 16:40:29.147236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 16:40:29.148032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 16:40:29.174926 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:40:29.176476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 16:40:29.199831 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 16:40:29.199997 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:40:29.203364 systemd[1]: Stopped target timers.target - Timer Units.
May 16 16:40:29.205385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 16:40:29.205505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:40:29.209510 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 16:40:29.209641 systemd[1]: Stopped target basic.target - Basic System.
May 16 16:40:29.211551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 16:40:29.211879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:40:29.212408 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 16:40:29.212726 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:40:29.213082 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 16:40:29.213582 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:40:29.213917 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 16:40:29.214428 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 16:40:29.214754 systemd[1]: Stopped target swap.target - Swaps.
May 16 16:40:29.215066 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 16:40:29.215193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:40:29.215919 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 16:40:29.216449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:40:29.216741 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 16:40:29.260162 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:40:29.262300 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 16:40:29.262410 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 16:40:29.263988 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 16:40:29.264105 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:40:29.266542 systemd[1]: Stopped target paths.target - Path Units.
May 16 16:40:29.266788 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 16:40:29.274201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:40:29.274361 systemd[1]: Stopped target slices.target - Slice Units.
May 16 16:40:29.276929 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 16:40:29.278609 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 16:40:29.278697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:40:29.280340 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 16:40:29.280419 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:40:29.282057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 16:40:29.282194 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:40:29.283868 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 16:40:29.283969 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 16:40:29.288698 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 16:40:29.290776 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 16:40:29.290893 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:40:29.294591 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 16:40:29.295848 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 16:40:29.295998 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:40:29.298096 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 16:40:29.298212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:40:29.305473 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 16:40:29.305606 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 16:40:29.314870 ignition[1093]: INFO : Ignition 2.21.0
May 16 16:40:29.314870 ignition[1093]: INFO : Stage: umount
May 16 16:40:29.316590 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:40:29.316590 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:40:29.316590 ignition[1093]: INFO : umount: umount passed
May 16 16:40:29.316590 ignition[1093]: INFO : Ignition finished successfully
May 16 16:40:29.319227 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 16:40:29.319348 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 16:40:29.322360 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 16:40:29.322826 systemd[1]: Stopped target network.target - Network.
May 16 16:40:29.325469 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 16:40:29.325520 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 16:40:29.326527 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 16:40:29.326583 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 16:40:29.328777 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 16:40:29.328829 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 16:40:29.329162 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 16:40:29.329206 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 16:40:29.329755 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 16:40:29.330003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 16:40:29.339206 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 16:40:29.339358 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 16:40:29.344801 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 16:40:29.345085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 16:40:29.345176 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:40:29.349815 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 16:40:29.350842 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 16:40:29.350972 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 16:40:29.354624 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 16:40:29.354840 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 16:40:29.355837 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 16:40:29.355886 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:40:29.360711 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 16:40:29.363578 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 16:40:29.363644 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:40:29.365939 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:40:29.366012 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:40:29.368314 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 16:40:29.368367 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 16:40:29.372266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:40:29.374622 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 16:40:29.393996 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 16:40:29.394206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:40:29.395287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 16:40:29.395333 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 16:40:29.397422 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 16:40:29.397456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:40:29.399378 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 16:40:29.399426 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:40:29.400079 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 16:40:29.400120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 16:40:29.400898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 16:40:29.400940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:40:29.409537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 16:40:29.410351 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 16:40:29.410401 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:40:29.414827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 16:40:29.414875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:40:29.419168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:40:29.419212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:40:29.423056 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 16:40:29.439283 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 16:40:29.448016 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 16:40:29.448148 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 16:40:29.548254 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 16:40:29.548382 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 16:40:29.550423 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 16:40:29.551189 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 16:40:29.551244 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 16:40:29.556182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 16:40:29.575137 systemd[1]: Switching root.
May 16 16:40:29.612511 systemd-journald[220]: Journal stopped
May 16 16:40:31.034520 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
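Each record in this transcript follows journald's short output format: an abbreviated timestamp, a source (unit name, `kernel`, or a tool name), an optional PID in brackets, and a free-form message. A minimal sketch, assuming the `MMM DD HH:MM:SS.ffffff` timestamps shown here, for splitting the common record shapes back into fields with Python's standard library:

```python
import re

# Matches journald short-format records as they appear in this transcript, e.g.
# "May 16 16:40:29.612511 systemd-journald[220]: Journal stopped"
RECORD = re.compile(
    r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) "  # timestamp
    r"(?P<source>[\w@.\-]+)"                            # unit / kernel / tool name
    r"(?:\[(?P<pid>\d+)\])?: "                          # optional [PID]
    r"(?P<message>.*)"                                  # free-form message
)

def parse_record(line: str) -> dict:
    """Split one journal line into ts, source, pid (or None), and message."""
    m = RECORD.match(line)
    if m is None:
        raise ValueError(f"not a journal record: {line!r}")
    return m.groupdict()

rec = parse_record("May 16 16:40:29.612511 systemd-journald[220]: Journal stopped")
# rec["source"] == "systemd-journald", rec["pid"] == "220"
```

Note this is a sketch for the typical lines above; oddballs like `zram_generator::config[1138]:` (a `::`-qualified source) would need the character class widened.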
May 16 16:40:31.034586 kernel: SELinux: policy capability network_peer_controls=1
May 16 16:40:31.034610 kernel: SELinux: policy capability open_perms=1
May 16 16:40:31.034624 kernel: SELinux: policy capability extended_socket_class=1
May 16 16:40:31.034640 kernel: SELinux: policy capability always_check_network=0
May 16 16:40:31.034652 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 16:40:31.034663 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 16:40:31.034674 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 16:40:31.034685 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 16:40:31.034696 kernel: SELinux: policy capability userspace_initial_context=0
May 16 16:40:31.034708 kernel: audit: type=1403 audit(1747413630.139:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 16:40:31.034720 systemd[1]: Successfully loaded SELinux policy in 53.135ms.
May 16 16:40:31.034742 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.688ms.
May 16 16:40:31.034756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:40:31.034770 systemd[1]: Detected virtualization kvm.
May 16 16:40:31.034782 systemd[1]: Detected architecture x86-64.
May 16 16:40:31.034793 systemd[1]: Detected first boot.
May 16 16:40:31.034806 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:40:31.034818 zram_generator::config[1138]: No configuration found.
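The systemd startup banner above lists compile-time features as `+NAME` (built in) or `-NAME` (compiled out). A small sketch for turning such a feature string into two sets, useful when checking whether a given build supports, say, TPM2 or APPARMOR:

```python
def parse_features(banner: str) -> tuple[set, set]:
    """Split a systemd feature string like '+PAM +AUDIT -APPARMOR' into
    the set of features compiled in (+) and compiled out (-)."""
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

on, off = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA -GCRYPT")
# on == {"PAM", "AUDIT", "SELINUX", "IMA"}; off == {"APPARMOR", "GCRYPT"}
```

Applied to the banner in this log, it would report SELINUX and TPM2 enabled and APPARMOR and BPF_FRAMEWORK absent, matching the SELinux policy-load messages that precede it.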
May 16 16:40:31.034831 kernel: Guest personality initialized and is inactive
May 16 16:40:31.034844 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 16:40:31.034855 kernel: Initialized host personality
May 16 16:40:31.034866 kernel: NET: Registered PF_VSOCK protocol family
May 16 16:40:31.034883 systemd[1]: Populated /etc with preset unit settings.
May 16 16:40:31.034896 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 16:40:31.034908 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 16:40:31.034920 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 16:40:31.034932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 16:40:31.034948 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 16:40:31.034962 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 16:40:31.034974 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 16:40:31.034993 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 16:40:31.035006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 16:40:31.035018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 16:40:31.035030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 16:40:31.035043 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 16:40:31.035056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:40:31.035070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:40:31.035083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 16:40:31.035094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 16:40:31.035107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 16:40:31.035119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:40:31.035144 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 16:40:31.035157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:40:31.035169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:40:31.035188 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 16:40:31.035201 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 16:40:31.035213 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 16:40:31.035225 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 16:40:31.035237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:40:31.035249 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:40:31.035261 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:40:31.035273 systemd[1]: Reached target swap.target - Swaps.
May 16 16:40:31.035286 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 16:40:31.035299 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 16:40:31.035311 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 16:40:31.035324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:40:31.035337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:40:31.035348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:40:31.035360 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 16:40:31.035372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 16:40:31.035384 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 16:40:31.035396 systemd[1]: Mounting media.mount - External Media Directory...
May 16 16:40:31.035410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.035422 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 16:40:31.035435 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 16:40:31.035447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 16:40:31.035459 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 16:40:31.035471 systemd[1]: Reached target machines.target - Containers.
May 16 16:40:31.035483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 16:40:31.035496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:40:31.035509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:40:31.035522 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 16:40:31.035534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:40:31.035546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:40:31.035557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:40:31.035569 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 16:40:31.035582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:40:31.035595 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 16:40:31.035607 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 16:40:31.035621 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 16:40:31.035633 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 16:40:31.035645 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 16:40:31.035657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:40:31.035669 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:40:31.035682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:40:31.035694 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:40:31.035706 kernel: ACPI: bus type drm_connector registered
May 16 16:40:31.035717 kernel: fuse: init (API version 7.41)
May 16 16:40:31.035731 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 16:40:31.035744 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 16:40:31.035756 kernel: loop: module loaded
May 16 16:40:31.035767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:40:31.035779 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 16:40:31.035794 systemd[1]: Stopped verity-setup.service.
May 16 16:40:31.035806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.035818 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 16:40:31.035830 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 16:40:31.035846 systemd[1]: Mounted media.mount - External Media Directory.
May 16 16:40:31.035878 systemd-journald[1213]: Collecting audit messages is disabled.
May 16 16:40:31.035900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 16:40:31.035912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 16:40:31.035925 systemd-journald[1213]: Journal started
May 16 16:40:31.035947 systemd-journald[1213]: Runtime Journal (/run/log/journal/b3b61d7399e04f4582865cb5aa2924a8) is 6M, max 48.5M, 42.4M free.
May 16 16:40:30.775181 systemd[1]: Queued start job for default target multi-user.target.
May 16 16:40:30.802068 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 16:40:30.802565 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 16:40:31.039149 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:40:31.039981 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 16:40:31.041273 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 16:40:31.042843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:40:31.044443 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 16:40:31.044671 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 16:40:31.046141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:40:31.046476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:40:31.047879 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:40:31.048098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:40:31.049483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:40:31.049686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:40:31.051571 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 16:40:31.051790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 16:40:31.053154 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:40:31.053356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:40:31.054767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:40:31.056216 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:40:31.057752 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 16:40:31.059328 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 16:40:31.074538 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:40:31.077299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 16:40:31.081350 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 16:40:31.082710 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 16:40:31.082750 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:40:31.084833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 16:40:31.089043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 16:40:31.090724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:40:31.092110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 16:40:31.096604 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 16:40:31.098034 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:40:31.104145 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 16:40:31.105279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:40:31.108229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:40:31.110715 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 16:40:31.122234 systemd-journald[1213]: Time spent on flushing to /var/log/journal/b3b61d7399e04f4582865cb5aa2924a8 is 14.156ms for 1067 entries.
May 16 16:40:31.122234 systemd-journald[1213]: System Journal (/var/log/journal/b3b61d7399e04f4582865cb5aa2924a8) is 8M, max 195.6M, 187.6M free.
May 16 16:40:31.152804 systemd-journald[1213]: Received client request to flush runtime journal.
May 16 16:40:31.152865 kernel: loop0: detected capacity change from 0 to 146240
May 16 16:40:31.114332 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 16:40:31.118160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:40:31.120476 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 16:40:31.121883 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 16:40:31.130951 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 16:40:31.132427 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 16:40:31.136244 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 16:40:31.150353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:40:31.154463 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 16:40:31.164829 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 16:40:31.176293 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 16:40:31.180179 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 16:40:31.180655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:40:31.198164 kernel: loop1: detected capacity change from 0 to 113872
May 16 16:40:31.215105 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
May 16 16:40:31.215140 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
May 16 16:40:31.223695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:40:31.226160 kernel: loop2: detected capacity change from 0 to 221472
May 16 16:40:31.251237 kernel: loop3: detected capacity change from 0 to 146240
May 16 16:40:31.264168 kernel: loop4: detected capacity change from 0 to 113872
May 16 16:40:31.275159 kernel: loop5: detected capacity change from 0 to 221472
May 16 16:40:31.283889 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 16:40:31.284634 (sd-merge)[1280]: Merged extensions into '/usr'.
May 16 16:40:31.289020 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 16:40:31.289119 systemd[1]: Reloading...
May 16 16:40:31.357164 zram_generator::config[1309]: No configuration found.
May 16 16:40:31.424302 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 16:40:31.462232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:40:31.541805 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 16:40:31.542106 systemd[1]: Reloading finished in 252 ms.
May 16 16:40:31.570569 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 16:40:31.572276 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 16:40:31.586763 systemd[1]: Starting ensure-sysext.service...
May 16 16:40:31.588933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:40:31.598391 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
May 16 16:40:31.598407 systemd[1]: Reloading...
May 16 16:40:31.609236 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 16:40:31.609273 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 16:40:31.609613 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 16:40:31.609864 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 16:40:31.610742 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 16:40:31.610998 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 16 16:40:31.611064 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 16 16:40:31.644367 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:40:31.644382 systemd-tmpfiles[1344]: Skipping /boot
May 16 16:40:31.658163 zram_generator::config[1374]: No configuration found.
May 16 16:40:31.662494 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:40:31.662611 systemd-tmpfiles[1344]: Skipping /boot
May 16 16:40:31.747659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:40:31.836516 systemd[1]: Reloading finished in 237 ms.
May 16 16:40:31.858743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 16:40:31.881855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:40:31.891202 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:40:31.893872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 16:40:31.911071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 16:40:31.915092 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:40:31.920304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:40:31.924344 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 16:40:31.928705 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.929562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:40:31.931284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:40:31.934705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:40:31.937571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:40:31.938755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:40:31.938870 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:40:31.946219 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 16:40:31.947429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.949720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:40:31.950472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:40:31.952285 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:40:31.952949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:40:31.954637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:40:31.954850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:40:31.958960 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 16:40:31.966443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.966738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:40:31.967779 systemd-udevd[1417]: Using default interface naming scheme 'v255'.
May 16 16:40:31.969364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:40:31.971647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:40:31.975713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:40:31.976930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:40:31.977108 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:40:31.980582 augenrules[1446]: No rules
May 16 16:40:31.985088 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 16:40:31.986332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:31.989636 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:40:31.990259 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:40:31.993604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:40:31.993815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:40:31.995513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:40:31.995812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:40:31.997855 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:40:31.998063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:40:31.999920 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 16:40:32.001920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 16:40:32.003712 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 16:40:32.006337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:40:32.016390 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 16:40:32.034224 systemd[1]: Finished ensure-sysext.service.
May 16 16:40:32.044602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:32.045884 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:40:32.047433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:40:32.050280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:40:32.055810 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:40:32.063431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:40:32.069285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:40:32.070557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:40:32.070606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:40:32.073255 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:40:32.078340 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 16:40:32.079520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 16:40:32.079545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:40:32.080139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:40:32.081306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:40:32.082791 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:40:32.082999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:40:32.084422 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:40:32.084618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:40:32.092706 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 16:40:32.093098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:40:32.098951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:40:32.099385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:40:32.102554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:40:32.111807 augenrules[1492]: /sbin/augenrules: No change
May 16 16:40:32.128211 augenrules[1527]: No rules
May 16 16:40:32.148061 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:40:32.148377 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:40:32.159149 kernel: mousedev: PS/2 mouse device common for all mice
May 16 16:40:32.167326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:40:32.170076 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 16:40:32.177081 systemd-resolved[1413]: Positive Trust Anchors:
May 16 16:40:32.177099 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:40:32.177162 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:40:32.180630 systemd-resolved[1413]: Defaulting to hostname 'linux'.
May 16 16:40:32.182420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:40:32.184158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:40:32.186216 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 16 16:40:32.189299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 16:40:32.191149 kernel: ACPI: button: Power Button [PWRF]
May 16 16:40:32.220988 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 16:40:32.221743 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 16:40:32.222070 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 16:40:32.275526 systemd-networkd[1497]: lo: Link UP
May 16 16:40:32.275859 systemd-networkd[1497]: lo: Gained carrier
May 16 16:40:32.279139 systemd-networkd[1497]: Enumeration completed
May 16 16:40:32.279770 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:40:32.279835 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:40:32.282489 systemd-networkd[1497]: eth0: Link UP
May 16 16:40:32.282632 systemd-networkd[1497]: eth0: Gained carrier
May 16 16:40:32.282646 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:40:32.288792 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:40:32.290074 systemd[1]: Reached target network.target - Network.
May 16 16:40:32.292397 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 16:40:32.294528 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 16:40:32.295811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 16:40:32.297086 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:40:32.298275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 16:40:32.299561 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 16:40:32.300829 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 16 16:40:32.302019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 16:40:32.302100 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 16:40:32.302120 systemd[1]: Reached target paths.target - Path Units.
May 16 16:40:32.304232 systemd[1]: Reached target time-set.target - System Time Set.
May 16 16:40:32.305427 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 16:40:32.306636 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 16:40:32.308173 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:40:32.310076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 16:40:32.312421 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:40:32.314309 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection.
May 16 16:40:33.791980 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 16:40:33.792027 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2025-05-16 16:40:33.791892 UTC.
May 16 16:40:33.792069 systemd-resolved[1413]: Clock change detected. Flushing caches.
May 16 16:40:33.798999 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 16:40:33.803499 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 16:40:33.804961 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 16:40:33.806266 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 16:40:33.818413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 16:40:33.819845 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 16:40:33.821823 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 16:40:33.830165 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:40:33.831321 systemd[1]: Reached target basic.target - Basic System.
May 16 16:40:33.832486 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 16:40:33.832618 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 16:40:33.835655 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 16:40:33.838243 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 16:40:33.850706 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 16:40:33.854571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 16:40:33.856906 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 16:40:33.858013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 16:40:33.859720 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 16 16:40:33.862432 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 16:40:33.864960 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 16:40:33.870001 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 16:40:33.873089 oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 16 16:40:33.877687 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 16 16:40:33.877834 jq[1564]: false
May 16 16:40:33.873551 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 16:40:33.879933 oslogin_cache_refresh[1566]: Failure getting users, quitting
May 16 16:40:33.881575 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting
May 16 16:40:33.881575 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:40:33.881575 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache
May 16 16:40:33.879946 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:40:33.879982 oslogin_cache_refresh[1566]: Refreshing group entry cache
May 16 16:40:33.884859 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 16:40:33.887005 oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 16 16:40:33.887243 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 16 16:40:33.887243 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:40:33.887015 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:40:33.888842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:40:33.891038 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 16:40:33.892091 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 16:40:33.892979 systemd[1]: Starting update-engine.service - Update Engine...
May 16 16:40:33.901482 extend-filesystems[1565]: Found loop3
May 16 16:40:33.901482 extend-filesystems[1565]: Found loop4
May 16 16:40:33.901482 extend-filesystems[1565]: Found loop5
May 16 16:40:33.901482 extend-filesystems[1565]: Found sr0
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda1
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda2
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda3
May 16 16:40:33.901482 extend-filesystems[1565]: Found usr
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda4
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda6
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda7
May 16 16:40:33.901482 extend-filesystems[1565]: Found vda9
May 16 16:40:33.901482 extend-filesystems[1565]: Checking size of /dev/vda9
May 16 16:40:33.898621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 16:40:33.918446 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 16:40:33.928204 update_engine[1579]: I20250516 16:40:33.926939 1579 main.cc:92] Flatcar Update Engine starting
May 16 16:40:33.922364 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 16:40:33.928733 jq[1581]: true
May 16 16:40:33.927261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 16:40:33.933697 extend-filesystems[1565]: Resized partition /dev/vda9
May 16 16:40:33.936450 extend-filesystems[1591]: resize2fs 1.47.2 (1-Jan-2025)
May 16 16:40:33.937255 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 16:40:33.941417 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 16 16:40:33.941712 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 16 16:40:33.945108 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 16:40:33.942712 systemd[1]: motdgen.service: Deactivated successfully.
May 16 16:40:33.944210 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 16:40:33.945944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 16:40:33.946256 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 16:40:33.961827 kernel: kvm_amd: TSC scaling supported
May 16 16:40:33.961891 kernel: kvm_amd: Nested Virtualization enabled
May 16 16:40:33.961905 kernel: kvm_amd: Nested Paging enabled
May 16 16:40:33.961917 kernel: kvm_amd: LBR virtualization supported
May 16 16:40:33.964561 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 16:40:33.964644 kernel: kvm_amd: Virtual GIF supported
May 16 16:40:33.966101 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 16:40:33.975717 jq[1593]: true
May 16 16:40:33.999643 tar[1592]: linux-amd64/helm
May 16 16:40:34.006395 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 16:40:34.007847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:40:34.008336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:40:34.016023 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 16:40:34.031884 systemd-logind[1572]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 16:40:34.032284 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 16:40:34.032779 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 16:40:34.032779 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 16:40:34.032779 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 16:40:34.038456 extend-filesystems[1565]: Resized filesystem in /dev/vda9
May 16 16:40:34.038102 dbus-daemon[1562]: [system] SELinux support is enabled
May 16 16:40:34.033481 systemd-logind[1572]: New seat seat0.
May 16 16:40:34.034432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:40:34.038439 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 16:40:34.038605 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 16:40:34.045334 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 16:40:34.052029 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 16:40:34.055460 update_engine[1579]: I20250516 16:40:34.055335 1579 update_check_scheduler.cc:74] Next update check in 5m4s
May 16 16:40:34.057252 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 16 16:40:34.057952 systemd[1]: Started update-engine.service - Update Engine.
May 16 16:40:34.061348 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 16:40:34.061516 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 16:40:34.062845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 16:40:34.062965 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 16:40:34.066592 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 16:40:34.072525 kernel: EDAC MC: Ver: 3.0.0 May 16 16:40:34.092779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:40:34.103674 bash[1631]: Updated "/home/core/.ssh/authorized_keys" May 16 16:40:34.108448 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 16:40:34.109791 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 16:40:34.111121 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 16:40:34.195850 containerd[1594]: time="2025-05-16T16:40:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 16:40:34.197286 containerd[1594]: time="2025-05-16T16:40:34.197261812Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 16:40:34.205208 containerd[1594]: time="2025-05-16T16:40:34.205154035Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.119µs" May 16 16:40:34.205208 containerd[1594]: time="2025-05-16T16:40:34.205199621Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 16:40:34.205267 containerd[1594]: time="2025-05-16T16:40:34.205218556Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt 
type=io.containerd.internal.v1 May 16 16:40:34.205472 containerd[1594]: time="2025-05-16T16:40:34.205442586Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 16:40:34.205472 containerd[1594]: time="2025-05-16T16:40:34.205461903Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 16:40:34.205527 containerd[1594]: time="2025-05-16T16:40:34.205486238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:40:34.205580 containerd[1594]: time="2025-05-16T16:40:34.205558955Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:40:34.205580 containerd[1594]: time="2025-05-16T16:40:34.205573702Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:40:34.205895 containerd[1594]: time="2025-05-16T16:40:34.205864297Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:40:34.205895 containerd[1594]: time="2025-05-16T16:40:34.205882812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:40:34.205895 containerd[1594]: time="2025-05-16T16:40:34.205892911Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:40:34.205959 containerd[1594]: time="2025-05-16T16:40:34.205900615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 16:40:34.206007 containerd[1594]: 
time="2025-05-16T16:40:34.205987448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 16:40:34.206345 containerd[1594]: time="2025-05-16T16:40:34.206310134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:40:34.206467 containerd[1594]: time="2025-05-16T16:40:34.206365447Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:40:34.206467 containerd[1594]: time="2025-05-16T16:40:34.206414239Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 16:40:34.206513 containerd[1594]: time="2025-05-16T16:40:34.206470685Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 16:40:34.206997 containerd[1594]: time="2025-05-16T16:40:34.206972586Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 16:40:34.207071 containerd[1594]: time="2025-05-16T16:40:34.207045603Z" level=info msg="metadata content store policy set" policy=shared May 16 16:40:34.212658 containerd[1594]: time="2025-05-16T16:40:34.212618827Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 16:40:34.212708 containerd[1594]: time="2025-05-16T16:40:34.212663461Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 16:40:34.212708 containerd[1594]: time="2025-05-16T16:40:34.212679040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 16:40:34.212708 containerd[1594]: time="2025-05-16T16:40:34.212689740Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 May 16 16:40:34.212708 containerd[1594]: time="2025-05-16T16:40:34.212701783Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212712673Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212726228Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212737479Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212757998Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212767887Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 16:40:34.212780 containerd[1594]: time="2025-05-16T16:40:34.212776392Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 16:40:34.212885 containerd[1594]: time="2025-05-16T16:40:34.212793114Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 16:40:34.212935 containerd[1594]: time="2025-05-16T16:40:34.212903631Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 16:40:34.212935 containerd[1594]: time="2025-05-16T16:40:34.212927756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 16:40:34.212978 containerd[1594]: time="2025-05-16T16:40:34.212944828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 May 16 16:40:34.212978 containerd[1594]: time="2025-05-16T16:40:34.212954757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 16:40:34.212978 containerd[1594]: time="2025-05-16T16:40:34.212964044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 16:40:34.212978 containerd[1594]: time="2025-05-16T16:40:34.212975165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 16:40:34.213054 containerd[1594]: time="2025-05-16T16:40:34.212985996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 16:40:34.213054 containerd[1594]: time="2025-05-16T16:40:34.212996355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 16:40:34.213054 containerd[1594]: time="2025-05-16T16:40:34.213010071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 16:40:34.213054 containerd[1594]: time="2025-05-16T16:40:34.213019569Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 16:40:34.213054 containerd[1594]: time="2025-05-16T16:40:34.213028295Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 16:40:34.213149 containerd[1594]: time="2025-05-16T16:40:34.213088668Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 16:40:34.213149 containerd[1594]: time="2025-05-16T16:40:34.213101522Z" level=info msg="Start snapshots syncer" May 16 16:40:34.213149 containerd[1594]: time="2025-05-16T16:40:34.213123003Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 16:40:34.213409 containerd[1594]: time="2025-05-16T16:40:34.213343777Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 16:40:34.213513 containerd[1594]: time="2025-05-16T16:40:34.213421402Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 16:40:34.214319 containerd[1594]: 
time="2025-05-16T16:40:34.214294189Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 16:40:34.214451 containerd[1594]: time="2025-05-16T16:40:34.214421168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 16:40:34.214451 containerd[1594]: time="2025-05-16T16:40:34.214445744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 16:40:34.214508 containerd[1594]: time="2025-05-16T16:40:34.214456955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 16:40:34.214508 containerd[1594]: time="2025-05-16T16:40:34.214468767Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 16:40:34.214508 containerd[1594]: time="2025-05-16T16:40:34.214480028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 16:40:34.214508 containerd[1594]: time="2025-05-16T16:40:34.214490478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 16:40:34.214508 containerd[1594]: time="2025-05-16T16:40:34.214502410Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 16:40:34.214599 containerd[1594]: time="2025-05-16T16:40:34.214523660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 16:40:34.214599 containerd[1594]: time="2025-05-16T16:40:34.214534019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 16:40:34.214599 containerd[1594]: time="2025-05-16T16:40:34.214543277Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 16:40:34.215210 containerd[1594]: 
time="2025-05-16T16:40:34.215146188Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:40:34.215210 containerd[1594]: time="2025-05-16T16:40:34.215166726Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:40:34.215210 containerd[1594]: time="2025-05-16T16:40:34.215175302Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:40:34.215210 containerd[1594]: time="2025-05-16T16:40:34.215184199Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:40:34.215210 containerd[1594]: time="2025-05-16T16:40:34.215202814Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 16:40:34.215210 containerd[1594]: time="2025-05-16T16:40:34.215212863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215223973Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215242839Z" level=info msg="runtime interface created" May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215247938Z" level=info msg="created NRI interface" May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215260011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215269399Z" level=info msg="Connect containerd service" May 16 16:40:34.215335 containerd[1594]: time="2025-05-16T16:40:34.215289907Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" May 16 16:40:34.216000 containerd[1594]: time="2025-05-16T16:40:34.215966526Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:40:34.262961 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 16:40:34.286833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 16:40:34.290495 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 16:40:34.302738 containerd[1594]: time="2025-05-16T16:40:34.302697930Z" level=info msg="Start subscribing containerd event" May 16 16:40:34.302879 containerd[1594]: time="2025-05-16T16:40:34.302853982Z" level=info msg="Start recovering state" May 16 16:40:34.303098 containerd[1594]: time="2025-05-16T16:40:34.302716775Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 16:40:34.303226 containerd[1594]: time="2025-05-16T16:40:34.303211483Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 16:40:34.303385 containerd[1594]: time="2025-05-16T16:40:34.303087811Z" level=info msg="Start event monitor" May 16 16:40:34.304414 containerd[1594]: time="2025-05-16T16:40:34.303364700Z" level=info msg="Start cni network conf syncer for default" May 16 16:40:34.304629 containerd[1594]: time="2025-05-16T16:40:34.304453763Z" level=info msg="Start streaming server" May 16 16:40:34.304629 containerd[1594]: time="2025-05-16T16:40:34.304465715Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 16:40:34.304629 containerd[1594]: time="2025-05-16T16:40:34.304472498Z" level=info msg="runtime interface starting up..." May 16 16:40:34.304629 containerd[1594]: time="2025-05-16T16:40:34.304478720Z" level=info msg="starting plugins..." 
May 16 16:40:34.304629 containerd[1594]: time="2025-05-16T16:40:34.304495131Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 16:40:34.304666 systemd[1]: Started containerd.service - containerd container runtime. May 16 16:40:34.305006 containerd[1594]: time="2025-05-16T16:40:34.304990159Z" level=info msg="containerd successfully booted in 0.109913s" May 16 16:40:34.310524 systemd[1]: issuegen.service: Deactivated successfully. May 16 16:40:34.310791 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 16:40:34.313775 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 16:40:34.334970 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 16:40:34.338503 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 16:40:34.340575 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 16:40:34.342074 systemd[1]: Reached target getty.target - Login Prompts. May 16 16:40:34.441497 tar[1592]: linux-amd64/LICENSE May 16 16:40:34.441624 tar[1592]: linux-amd64/README.md May 16 16:40:34.463774 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 16:40:35.185591 systemd-networkd[1497]: eth0: Gained IPv6LL May 16 16:40:35.188675 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 16:40:35.190492 systemd[1]: Reached target network-online.target - Network is Online. May 16 16:40:35.193111 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 16:40:35.195480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:40:35.197761 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 16:40:35.223754 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 16:40:35.225450 systemd[1]: coreos-metadata.service: Deactivated successfully. 
May 16 16:40:35.225761 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 16:40:35.229361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 16:40:35.928975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:40:35.931012 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 16:40:35.933045 systemd[1]: Startup finished in 2.817s (kernel) + 6.486s (initrd) + 4.369s (userspace) = 13.673s. May 16 16:40:35.944759 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 16:40:36.349588 kubelet[1707]: E0516 16:40:36.349512 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 16:40:36.354035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 16:40:36.354243 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 16:40:36.354674 systemd[1]: kubelet.service: Consumed 984ms CPU time, 264.5M memory peak. May 16 16:40:38.676715 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 16:40:38.678060 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:47386.service - OpenSSH per-connection server daemon (10.0.0.1:47386). May 16 16:40:38.748449 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 47386 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:38.750315 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:38.757099 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 16 16:40:38.758232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 16:40:38.765549 systemd-logind[1572]: New session 1 of user core. May 16 16:40:38.782469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 16:40:38.785593 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 16:40:38.808296 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 16:40:38.814622 systemd-logind[1572]: New session c1 of user core. May 16 16:40:38.956651 systemd[1724]: Queued start job for default target default.target. May 16 16:40:38.976627 systemd[1724]: Created slice app.slice - User Application Slice. May 16 16:40:38.976652 systemd[1724]: Reached target paths.target - Paths. May 16 16:40:38.976690 systemd[1724]: Reached target timers.target - Timers. May 16 16:40:38.978188 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 16:40:38.989647 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 16:40:38.989782 systemd[1724]: Reached target sockets.target - Sockets. May 16 16:40:38.989825 systemd[1724]: Reached target basic.target - Basic System. May 16 16:40:38.989865 systemd[1724]: Reached target default.target - Main User Target. May 16 16:40:38.989901 systemd[1724]: Startup finished in 168ms. May 16 16:40:38.990354 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 16:40:38.992152 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 16:40:39.057948 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394). 
May 16 16:40:39.101822 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.103142 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.107441 systemd-logind[1572]: New session 2 of user core. May 16 16:40:39.114494 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 16:40:39.170999 sshd[1737]: Connection closed by 10.0.0.1 port 47394 May 16 16:40:39.171406 sshd-session[1735]: pam_unix(sshd:session): session closed for user core May 16 16:40:39.187104 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:47394.service: Deactivated successfully. May 16 16:40:39.189235 systemd[1]: session-2.scope: Deactivated successfully. May 16 16:40:39.190203 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit. May 16 16:40:39.193652 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:47404.service - OpenSSH per-connection server daemon (10.0.0.1:47404). May 16 16:40:39.194446 systemd-logind[1572]: Removed session 2. May 16 16:40:39.241395 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 47404 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.242940 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.247924 systemd-logind[1572]: New session 3 of user core. May 16 16:40:39.262559 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 16:40:39.312392 sshd[1745]: Connection closed by 10.0.0.1 port 47404 May 16 16:40:39.312825 sshd-session[1743]: pam_unix(sshd:session): session closed for user core May 16 16:40:39.321134 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:47404.service: Deactivated successfully. May 16 16:40:39.323123 systemd[1]: session-3.scope: Deactivated successfully. May 16 16:40:39.323830 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit. 
May 16 16:40:39.326862 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:47406.service - OpenSSH per-connection server daemon (10.0.0.1:47406). May 16 16:40:39.327524 systemd-logind[1572]: Removed session 3. May 16 16:40:39.381798 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.383396 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.387821 systemd-logind[1572]: New session 4 of user core. May 16 16:40:39.397592 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 16:40:39.450590 sshd[1753]: Connection closed by 10.0.0.1 port 47406 May 16 16:40:39.450868 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 16 16:40:39.468916 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:47406.service: Deactivated successfully. May 16 16:40:39.471108 systemd[1]: session-4.scope: Deactivated successfully. May 16 16:40:39.472067 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. May 16 16:40:39.475979 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:47412.service - OpenSSH per-connection server daemon (10.0.0.1:47412). May 16 16:40:39.476645 systemd-logind[1572]: Removed session 4. May 16 16:40:39.528979 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 47412 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.530764 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.535735 systemd-logind[1572]: New session 5 of user core. May 16 16:40:39.545674 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 16 16:40:39.603075 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 16:40:39.603364 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:40:39.630347 sudo[1762]: pam_unix(sudo:session): session closed for user root May 16 16:40:39.632154 sshd[1761]: Connection closed by 10.0.0.1 port 47412 May 16 16:40:39.632522 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 16 16:40:39.643719 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:47412.service: Deactivated successfully. May 16 16:40:39.645690 systemd[1]: session-5.scope: Deactivated successfully. May 16 16:40:39.646491 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit. May 16 16:40:39.649942 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:47416.service - OpenSSH per-connection server daemon (10.0.0.1:47416). May 16 16:40:39.650512 systemd-logind[1572]: Removed session 5. May 16 16:40:39.699901 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 47416 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.701504 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.705960 systemd-logind[1572]: New session 6 of user core. May 16 16:40:39.715513 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 16 16:40:39.768271 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 16:40:39.768648 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:40:39.840975 sudo[1773]: pam_unix(sudo:session): session closed for user root May 16 16:40:39.848384 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 16:40:39.848765 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:40:39.858644 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 16:40:39.911426 augenrules[1795]: No rules May 16 16:40:39.913032 systemd[1]: audit-rules.service: Deactivated successfully. May 16 16:40:39.913321 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 16:40:39.914582 sudo[1772]: pam_unix(sudo:session): session closed for user root May 16 16:40:39.916261 sshd[1771]: Connection closed by 10.0.0.1 port 47416 May 16 16:40:39.916769 sshd-session[1768]: pam_unix(sshd:session): session closed for user core May 16 16:40:39.929821 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:47416.service: Deactivated successfully. May 16 16:40:39.931338 systemd[1]: session-6.scope: Deactivated successfully. May 16 16:40:39.932038 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. May 16 16:40:39.934570 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:47424.service - OpenSSH per-connection server daemon (10.0.0.1:47424). May 16 16:40:39.935179 systemd-logind[1572]: Removed session 6. May 16 16:40:39.983313 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 47424 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:40:39.985108 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:40:39.989935 systemd-logind[1572]: New session 7 of user core. 
May 16 16:40:40.010632 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 16:40:40.063264 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 16:40:40.063613 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:40:40.365163 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 16:40:40.382670 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 16:40:40.594763 dockerd[1827]: time="2025-05-16T16:40:40.594696911Z" level=info msg="Starting up" May 16 16:40:40.596218 dockerd[1827]: time="2025-05-16T16:40:40.596187136Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 16:40:41.162670 dockerd[1827]: time="2025-05-16T16:40:41.162612779Z" level=info msg="Loading containers: start." May 16 16:40:41.174393 kernel: Initializing XFRM netlink socket May 16 16:40:41.513360 systemd-networkd[1497]: docker0: Link UP May 16 16:40:41.518138 dockerd[1827]: time="2025-05-16T16:40:41.518102105Z" level=info msg="Loading containers: done." May 16 16:40:41.530965 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2377107150-merged.mount: Deactivated successfully. 
May 16 16:40:41.532699 dockerd[1827]: time="2025-05-16T16:40:41.532647158Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 16:40:41.532810 dockerd[1827]: time="2025-05-16T16:40:41.532731205Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 16 16:40:41.532842 dockerd[1827]: time="2025-05-16T16:40:41.532833247Z" level=info msg="Initializing buildkit" May 16 16:40:41.562091 dockerd[1827]: time="2025-05-16T16:40:41.562045050Z" level=info msg="Completed buildkit initialization" May 16 16:40:41.567942 dockerd[1827]: time="2025-05-16T16:40:41.567892217Z" level=info msg="Daemon has completed initialization" May 16 16:40:41.568078 dockerd[1827]: time="2025-05-16T16:40:41.568038141Z" level=info msg="API listen on /run/docker.sock" May 16 16:40:41.568395 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 16:40:42.343536 containerd[1594]: time="2025-05-16T16:40:42.343497832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 16:40:43.147422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476013910.mount: Deactivated successfully. 
May 16 16:40:44.138394 containerd[1594]: time="2025-05-16T16:40:44.138305648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:44.139006 containerd[1594]: time="2025-05-16T16:40:44.138981125Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 16 16:40:44.140200 containerd[1594]: time="2025-05-16T16:40:44.140147994Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:44.144998 containerd[1594]: time="2025-05-16T16:40:44.144943018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:44.146140 containerd[1594]: time="2025-05-16T16:40:44.146079480Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.802544959s" May 16 16:40:44.146140 containerd[1594]: time="2025-05-16T16:40:44.146121749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 16 16:40:44.146725 containerd[1594]: time="2025-05-16T16:40:44.146676850Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 16:40:45.503392 containerd[1594]: time="2025-05-16T16:40:45.503320175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:45.504192 containerd[1594]: time="2025-05-16T16:40:45.504140484Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 16 16:40:45.505636 containerd[1594]: time="2025-05-16T16:40:45.505598469Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:45.508891 containerd[1594]: time="2025-05-16T16:40:45.508830611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:45.509926 containerd[1594]: time="2025-05-16T16:40:45.509873147Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.36315555s" May 16 16:40:45.509967 containerd[1594]: time="2025-05-16T16:40:45.509922550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 16 16:40:45.510485 containerd[1594]: time="2025-05-16T16:40:45.510427146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 16 16:40:46.457101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 16:40:46.458609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:40:47.026975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 16:40:47.031461 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 16:40:47.138083 containerd[1594]: time="2025-05-16T16:40:47.138033651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:47.139867 containerd[1594]: time="2025-05-16T16:40:47.139815333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 16 16:40:47.140317 containerd[1594]: time="2025-05-16T16:40:47.140271839Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:47.143309 containerd[1594]: time="2025-05-16T16:40:47.143271085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:47.144384 containerd[1594]: time="2025-05-16T16:40:47.144328187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.6338683s" May 16 16:40:47.144384 containerd[1594]: time="2025-05-16T16:40:47.144384794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 16 16:40:47.144809 containerd[1594]: time="2025-05-16T16:40:47.144761340Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 16:40:47.170549 
kubelet[2107]: E0516 16:40:47.170495 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 16:40:47.176836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 16:40:47.177041 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 16:40:47.177403 systemd[1]: kubelet.service: Consumed 226ms CPU time, 111.4M memory peak. May 16 16:40:48.047361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850297341.mount: Deactivated successfully. May 16 16:40:49.417622 containerd[1594]: time="2025-05-16T16:40:49.417557977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:49.431278 containerd[1594]: time="2025-05-16T16:40:49.431216266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 16 16:40:49.480459 containerd[1594]: time="2025-05-16T16:40:49.480402054Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:49.516912 containerd[1594]: time="2025-05-16T16:40:49.516866070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:49.517622 containerd[1594]: time="2025-05-16T16:40:49.517569590Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.372775649s" May 16 16:40:49.517622 containerd[1594]: time="2025-05-16T16:40:49.517611138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 16 16:40:49.518071 containerd[1594]: time="2025-05-16T16:40:49.518039091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 16:40:50.294897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527383005.mount: Deactivated successfully. May 16 16:40:50.959337 containerd[1594]: time="2025-05-16T16:40:50.959268094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:50.960200 containerd[1594]: time="2025-05-16T16:40:50.960175116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 16:40:50.961384 containerd[1594]: time="2025-05-16T16:40:50.961334971Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:50.963852 containerd[1594]: time="2025-05-16T16:40:50.963825282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:50.964668 containerd[1594]: time="2025-05-16T16:40:50.964629370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.446557719s" May 16 16:40:50.964668 containerd[1594]: time="2025-05-16T16:40:50.964661771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 16:40:50.965216 containerd[1594]: time="2025-05-16T16:40:50.965189251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 16:40:51.498579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336918723.mount: Deactivated successfully. May 16 16:40:51.506073 containerd[1594]: time="2025-05-16T16:40:51.506014113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:40:51.506774 containerd[1594]: time="2025-05-16T16:40:51.506702795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 16:40:51.508050 containerd[1594]: time="2025-05-16T16:40:51.508014916Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:40:51.510004 containerd[1594]: time="2025-05-16T16:40:51.509975384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:40:51.510612 containerd[1594]: time="2025-05-16T16:40:51.510558467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 545.346073ms" May 16 16:40:51.510612 containerd[1594]: time="2025-05-16T16:40:51.510599234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 16:40:51.511086 containerd[1594]: time="2025-05-16T16:40:51.511043908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 16:40:53.514514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802240152.mount: Deactivated successfully. May 16 16:40:55.794591 containerd[1594]: time="2025-05-16T16:40:55.794518081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:55.845862 containerd[1594]: time="2025-05-16T16:40:55.845815360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 16 16:40:55.867798 containerd[1594]: time="2025-05-16T16:40:55.867717312Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:55.887430 containerd[1594]: time="2025-05-16T16:40:55.887353925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:40:55.888430 containerd[1594]: time="2025-05-16T16:40:55.888353309Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 4.377265058s" May 16 16:40:55.888430 containerd[1594]: time="2025-05-16T16:40:55.888419874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 16 16:40:57.207205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 16:40:57.208978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:40:57.410178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:40:57.436699 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 16:40:57.484800 kubelet[2265]: E0516 16:40:57.484679 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 16:40:57.488630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 16:40:57.488831 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 16:40:57.489194 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.7M memory peak. May 16 16:40:58.254878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:40:58.255124 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.7M memory peak. May 16 16:40:58.257457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:40:58.282613 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-7.scope)... May 16 16:40:58.282634 systemd[1]: Reloading... May 16 16:40:58.376413 zram_generator::config[2326]: No configuration found. 
May 16 16:40:59.034249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:40:59.162351 systemd[1]: Reloading finished in 879 ms. May 16 16:40:59.248057 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 16:40:59.248170 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 16:40:59.248510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:40:59.248567 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.2M memory peak. May 16 16:40:59.250229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:40:59.417202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:40:59.421322 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:40:59.454030 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:40:59.454030 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 16:40:59.454030 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 16:40:59.454409 kubelet[2371]: I0516 16:40:59.454074 2371 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:40:59.713357 kubelet[2371]: I0516 16:40:59.713324 2371 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 16:40:59.713357 kubelet[2371]: I0516 16:40:59.713347 2371 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:40:59.713592 kubelet[2371]: I0516 16:40:59.713574 2371 server.go:934] "Client rotation is on, will bootstrap in background" May 16 16:40:59.736957 kubelet[2371]: E0516 16:40:59.736922 2371 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" May 16 16:40:59.737504 kubelet[2371]: I0516 16:40:59.737476 2371 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:40:59.744390 kubelet[2371]: I0516 16:40:59.744348 2371 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:40:59.749832 kubelet[2371]: I0516 16:40:59.749796 2371 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 16:40:59.750314 kubelet[2371]: I0516 16:40:59.750292 2371 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 16:40:59.750475 kubelet[2371]: I0516 16:40:59.750442 2371 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:40:59.750623 kubelet[2371]: I0516 16:40:59.750465 2371 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 16 16:40:59.750623 kubelet[2371]: I0516 16:40:59.750622 2371 topology_manager.go:138] "Creating topology manager with none policy" May 16 16:40:59.750744 kubelet[2371]: I0516 16:40:59.750630 2371 container_manager_linux.go:300] "Creating device plugin manager" May 16 16:40:59.750744 kubelet[2371]: I0516 16:40:59.750743 2371 state_mem.go:36] "Initialized new in-memory state store" May 16 16:40:59.752538 kubelet[2371]: I0516 16:40:59.752510 2371 kubelet.go:408] "Attempting to sync node with API server" May 16 16:40:59.752538 kubelet[2371]: I0516 16:40:59.752533 2371 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:40:59.752617 kubelet[2371]: I0516 16:40:59.752562 2371 kubelet.go:314] "Adding apiserver pod source" May 16 16:40:59.752617 kubelet[2371]: I0516 16:40:59.752582 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:40:59.755119 kubelet[2371]: I0516 16:40:59.755096 2371 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:40:59.755484 kubelet[2371]: I0516 16:40:59.755457 2371 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:40:59.756279 kubelet[2371]: W0516 16:40:59.755917 2371 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 16:40:59.757827 kubelet[2371]: W0516 16:40:59.757681 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 16 16:40:59.757827 kubelet[2371]: E0516 16:40:59.757736 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" May 16 16:40:59.758293 kubelet[2371]: I0516 16:40:59.758275 2371 server.go:1274] "Started kubelet" May 16 16:40:59.758984 kubelet[2371]: I0516 16:40:59.758806 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:40:59.759313 kubelet[2371]: I0516 16:40:59.759165 2371 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:40:59.759313 kubelet[2371]: I0516 16:40:59.759217 2371 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:40:59.759551 kubelet[2371]: I0516 16:40:59.759405 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:40:59.760609 kubelet[2371]: I0516 16:40:59.760114 2371 server.go:449] "Adding debug handlers to kubelet server" May 16 16:40:59.763224 kubelet[2371]: W0516 16:40:59.763168 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 16 16:40:59.763224 kubelet[2371]: E0516 16:40:59.763220 2371 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" May 16 16:40:59.763489 kubelet[2371]: I0516 16:40:59.763463 2371 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:40:59.764158 kubelet[2371]: I0516 16:40:59.764135 2371 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 16:40:59.764359 kubelet[2371]: E0516 16:40:59.763389 2371 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400f77218e5083 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:40:59.758252163 +0000 UTC m=+0.332739625,LastTimestamp:2025-05-16 16:40:59.758252163 +0000 UTC m=+0.332739625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 16:40:59.764359 kubelet[2371]: E0516 16:40:59.764341 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:40:59.764548 kubelet[2371]: E0516 16:40:59.764427 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" 
interval="200ms" May 16 16:40:59.764548 kubelet[2371]: I0516 16:40:59.764440 2371 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 16:40:59.764548 kubelet[2371]: I0516 16:40:59.764500 2371 reconciler.go:26] "Reconciler: start to sync state" May 16 16:40:59.764864 kubelet[2371]: W0516 16:40:59.764804 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 16 16:40:59.764864 kubelet[2371]: E0516 16:40:59.764850 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" May 16 16:40:59.766979 kubelet[2371]: I0516 16:40:59.766806 2371 factory.go:221] Registration of the systemd container factory successfully May 16 16:40:59.766979 kubelet[2371]: I0516 16:40:59.766894 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:40:59.769127 kubelet[2371]: I0516 16:40:59.769064 2371 factory.go:221] Registration of the containerd container factory successfully May 16 16:40:59.772386 kubelet[2371]: E0516 16:40:59.770759 2371 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:40:59.780781 kubelet[2371]: I0516 16:40:59.780736 2371 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 16 16:40:59.782275 kubelet[2371]: I0516 16:40:59.782255 2371 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 16:40:59.782275 kubelet[2371]: I0516 16:40:59.782272 2371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 16:40:59.782346 kubelet[2371]: I0516 16:40:59.782289 2371 state_mem.go:36] "Initialized new in-memory state store" May 16 16:40:59.783307 kubelet[2371]: I0516 16:40:59.783282 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 16:40:59.783350 kubelet[2371]: I0516 16:40:59.783310 2371 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 16:40:59.783350 kubelet[2371]: I0516 16:40:59.783327 2371 kubelet.go:2321] "Starting kubelet main sync loop" May 16 16:40:59.783410 kubelet[2371]: E0516 16:40:59.783360 2371 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:40:59.784044 kubelet[2371]: W0516 16:40:59.783744 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 16 16:40:59.784044 kubelet[2371]: E0516 16:40:59.783771 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" May 16 16:40:59.864845 kubelet[2371]: E0516 16:40:59.864815 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:40:59.884177 kubelet[2371]: E0516 16:40:59.884140 2371 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" May 16 16:40:59.965103 kubelet[2371]: E0516 16:40:59.965018 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:40:59.965339 kubelet[2371]: E0516 16:40:59.965242 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" May 16 16:41:00.065714 kubelet[2371]: E0516 16:41:00.065669 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:41:00.084927 kubelet[2371]: E0516 16:41:00.084870 2371 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 16:41:00.158394 kubelet[2371]: I0516 16:41:00.158330 2371 policy_none.go:49] "None policy: Start" May 16 16:41:00.159064 kubelet[2371]: I0516 16:41:00.159034 2371 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:41:00.159064 kubelet[2371]: I0516 16:41:00.159054 2371 state_mem.go:35] "Initializing new in-memory state store" May 16 16:41:00.165972 kubelet[2371]: E0516 16:41:00.165947 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:41:00.166329 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 16:41:00.179449 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 16:41:00.182725 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 16:41:00.200281 kubelet[2371]: I0516 16:41:00.200258 2371 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 16:41:00.200509 kubelet[2371]: I0516 16:41:00.200488 2371 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 16:41:00.200576 kubelet[2371]: I0516 16:41:00.200502 2371 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 16:41:00.200712 kubelet[2371]: I0516 16:41:00.200693 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 16:41:00.202617 kubelet[2371]: E0516 16:41:00.202595 2371 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 16 16:41:00.302659 kubelet[2371]: I0516 16:41:00.302558 2371 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:41:00.302933 kubelet[2371]: E0516 16:41:00.302884 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
May 16 16:41:00.366553 kubelet[2371]: E0516 16:41:00.366505 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms"
May 16 16:41:00.494421 systemd[1]: Created slice kubepods-burstable-pod281d56150c36a126db1af01d90a85f2f.slice - libcontainer container kubepods-burstable-pod281d56150c36a126db1af01d90a85f2f.slice.
May 16 16:41:00.503901 kubelet[2371]: I0516 16:41:00.503857 2371 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:41:00.504275 kubelet[2371]: E0516 16:41:00.504155 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
May 16 16:41:00.508144 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice.
May 16 16:41:00.521235 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice.
May 16 16:41:00.567611 kubelet[2371]: I0516 16:41:00.567452 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:41:00.567611 kubelet[2371]: I0516 16:41:00.567490 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:41:00.567611 kubelet[2371]: I0516 16:41:00.567520 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:41:00.567611 kubelet[2371]: I0516 16:41:00.567534 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:41:00.567611 kubelet[2371]: I0516 16:41:00.567548 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:41:00.567850 kubelet[2371]: I0516 16:41:00.567561 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:41:00.567850 kubelet[2371]: I0516 16:41:00.567609 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:41:00.567850 kubelet[2371]: I0516 16:41:00.567639 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:41:00.567850 kubelet[2371]: I0516 16:41:00.567669 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 16 16:41:00.807079 kubelet[2371]: E0516 16:41:00.807041 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:00.807811 containerd[1594]: time="2025-05-16T16:41:00.807768045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:281d56150c36a126db1af01d90a85f2f,Namespace:kube-system,Attempt:0,}"
May 16 16:41:00.819046 kubelet[2371]: E0516 16:41:00.818976 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:00.819454 kubelet[2371]: W0516 16:41:00.819168 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 16 16:41:00.819454 kubelet[2371]: E0516 16:41:00.819198 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
May 16 16:41:00.819533 containerd[1594]: time="2025-05-16T16:41:00.819294806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}"
May 16 16:41:00.823679 kubelet[2371]: E0516 16:41:00.823628 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:00.823944 containerd[1594]: time="2025-05-16T16:41:00.823919491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}"
May 16 16:41:00.905707 kubelet[2371]: I0516 16:41:00.905657 2371 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:41:00.906106 kubelet[2371]: E0516 16:41:00.906065 2371 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost"
May 16 16:41:00.936582 kubelet[2371]: W0516 16:41:00.936548 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 16 16:41:00.936660 kubelet[2371]: E0516 16:41:00.936594 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
May 16 16:41:00.963358 kubelet[2371]: W0516 16:41:00.963302 2371 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 16 16:41:00.963358 kubelet[2371]: E0516 16:41:00.963347 2371 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError"
May 16 16:41:00.987552 containerd[1594]: time="2025-05-16T16:41:00.987425050Z" level=info msg="connecting to shim d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b" address="unix:///run/containerd/s/ef2a594c76464760906faa6fa5b6e8be279318ae0d672a385282dfa95626f0a7" namespace=k8s.io protocol=ttrpc version=3
May 16 16:41:00.988194 containerd[1594]: time="2025-05-16T16:41:00.988165649Z" level=info msg="connecting to shim b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903" address="unix:///run/containerd/s/87ceeffa198513ff7231174bf042b7f0760e8ff8eaf5c44197c16b0374f2a19f" namespace=k8s.io protocol=ttrpc version=3
May 16 16:41:01.003396 containerd[1594]: time="2025-05-16T16:41:01.003331526Z" level=info msg="connecting to shim 149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185" address="unix:///run/containerd/s/6c2a3e455c55f654b507a4191cd24de4275be8ed08c63bcd862e0aebc11ed24a" namespace=k8s.io protocol=ttrpc version=3
May 16 16:41:01.017589 systemd[1]: Started cri-containerd-b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903.scope - libcontainer container b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903.
May 16 16:41:01.021118 systemd[1]: Started cri-containerd-d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b.scope - libcontainer container d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b.
May 16 16:41:01.034520 systemd[1]: Started cri-containerd-149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185.scope - libcontainer container 149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185.
May 16 16:41:01.070884 containerd[1594]: time="2025-05-16T16:41:01.070787534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:281d56150c36a126db1af01d90a85f2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903\""
May 16 16:41:01.072337 kubelet[2371]: E0516 16:41:01.072315 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:01.076004 containerd[1594]: time="2025-05-16T16:41:01.075958704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b\""
May 16 16:41:01.077400 kubelet[2371]: E0516 16:41:01.077214 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:01.077458 containerd[1594]: time="2025-05-16T16:41:01.077294560Z" level=info msg="CreateContainer within sandbox \"b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 16 16:41:01.079472 containerd[1594]: time="2025-05-16T16:41:01.079428482Z" level=info msg="CreateContainer within sandbox \"d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 16 16:41:01.085494 containerd[1594]: time="2025-05-16T16:41:01.085449216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185\""
May 16 16:41:01.085958 kubelet[2371]: E0516 16:41:01.085932 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:01.087402 containerd[1594]: time="2025-05-16T16:41:01.087355101Z" level=info msg="CreateContainer within sandbox \"149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 16 16:41:01.092580 containerd[1594]: time="2025-05-16T16:41:01.092555355Z" level=info msg="Container b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d: CDI devices from CRI Config.CDIDevices: []"
May 16 16:41:01.095620 containerd[1594]: time="2025-05-16T16:41:01.095582753Z" level=info msg="Container 54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be: CDI devices from CRI Config.CDIDevices: []"
May 16 16:41:01.097525 containerd[1594]: time="2025-05-16T16:41:01.097486955Z" level=info msg="Container 5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5: CDI devices from CRI Config.CDIDevices: []"
May 16 16:41:01.104815 containerd[1594]: time="2025-05-16T16:41:01.104733608Z" level=info msg="CreateContainer within sandbox \"b31077ab6c3fa76a41ba87cd796a7b5147c37807ac8273deafc7ac895f6d8903\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d\""
May 16 16:41:01.105223 containerd[1594]: time="2025-05-16T16:41:01.105199752Z" level=info msg="StartContainer for \"b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d\""
May 16 16:41:01.106439 containerd[1594]: time="2025-05-16T16:41:01.106412988Z" level=info msg="connecting to shim b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d" address="unix:///run/containerd/s/87ceeffa198513ff7231174bf042b7f0760e8ff8eaf5c44197c16b0374f2a19f" protocol=ttrpc version=3
May 16 16:41:01.109820 containerd[1594]: time="2025-05-16T16:41:01.109782428Z" level=info msg="CreateContainer within sandbox \"149e9ba0386072a91dea6971af81e6c7202723f9ad2baac083c0442cc00a3185\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5\""
May 16 16:41:01.110242 containerd[1594]: time="2025-05-16T16:41:01.110217925Z" level=info msg="StartContainer for \"5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5\""
May 16 16:41:01.111393 containerd[1594]: time="2025-05-16T16:41:01.111164029Z" level=info msg="connecting to shim 5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5" address="unix:///run/containerd/s/6c2a3e455c55f654b507a4191cd24de4275be8ed08c63bcd862e0aebc11ed24a" protocol=ttrpc version=3
May 16 16:41:01.111760 containerd[1594]: time="2025-05-16T16:41:01.111731934Z" level=info msg="CreateContainer within sandbox \"d89b2c8e48190327388b47e54b5d08fd4b546603d2936f620f1ee7cd099d6b5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be\""
May 16 16:41:01.112101 containerd[1594]: time="2025-05-16T16:41:01.112069147Z" level=info msg="StartContainer for \"54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be\""
May 16 16:41:01.113083 containerd[1594]: time="2025-05-16T16:41:01.113032855Z" level=info msg="connecting to shim 54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be" address="unix:///run/containerd/s/ef2a594c76464760906faa6fa5b6e8be279318ae0d672a385282dfa95626f0a7" protocol=ttrpc version=3
May 16 16:41:01.128528 systemd[1]: Started cri-containerd-b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d.scope - libcontainer container b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d.
May 16 16:41:01.134012 systemd[1]: Started cri-containerd-5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5.scope - libcontainer container 5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5.
May 16 16:41:01.136052 systemd[1]: Started cri-containerd-54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be.scope - libcontainer container 54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be.
May 16 16:41:01.167964 kubelet[2371]: E0516 16:41:01.167913 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s"
May 16 16:41:01.180905 containerd[1594]: time="2025-05-16T16:41:01.180746196Z" level=info msg="StartContainer for \"b59e8c4dec652bc9cf6f055f6ae6cc07835989e351da7d861513ccd9bd7e138d\" returns successfully"
May 16 16:41:01.195948 containerd[1594]: time="2025-05-16T16:41:01.195889209Z" level=info msg="StartContainer for \"54741fe9bbbe3490094f9bca59800b508fcb614c498bd389237b9e30ccb2e7be\" returns successfully"
May 16 16:41:01.199526 containerd[1594]: time="2025-05-16T16:41:01.199473753Z" level=info msg="StartContainer for \"5169987267d660db2087f922d7d9e015d2202a0c9baa5f53d71bf4626ee671a5\" returns successfully"
May 16 16:41:01.708741 kubelet[2371]: I0516 16:41:01.708704 2371 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:41:01.791866 kubelet[2371]: E0516 16:41:01.791839 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:01.793742 kubelet[2371]: E0516 16:41:01.793714 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:01.796562 kubelet[2371]: E0516 16:41:01.796463 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:02.323888 kubelet[2371]: I0516 16:41:02.323850 2371 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 16 16:41:02.323888 kubelet[2371]: E0516 16:41:02.323886 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 16 16:41:02.331321 kubelet[2371]: E0516 16:41:02.331287 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.431772 kubelet[2371]: E0516 16:41:02.431717 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.532320 kubelet[2371]: E0516 16:41:02.532274 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.633300 kubelet[2371]: E0516 16:41:02.633154 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.733785 kubelet[2371]: E0516 16:41:02.733737 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.798465 kubelet[2371]: E0516 16:41:02.798423 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:41:02.834616 kubelet[2371]: E0516 16:41:02.834572 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:02.935220 kubelet[2371]: E0516 16:41:02.935082 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:03.035721 kubelet[2371]: E0516 16:41:03.035680 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:03.136341 kubelet[2371]: E0516 16:41:03.136271 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:03.237071 kubelet[2371]: E0516 16:41:03.237022 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:03.755288 kubelet[2371]: I0516 16:41:03.755248 2371 apiserver.go:52] "Watching apiserver"
May 16 16:41:03.765630 kubelet[2371]: I0516 16:41:03.765591 2371 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 16:41:04.364220 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-7.scope)...
May 16 16:41:04.364237 systemd[1]: Reloading...
May 16 16:41:04.449432 zram_generator::config[2688]: No configuration found.
May 16 16:41:04.549336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:41:04.680101 systemd[1]: Reloading finished in 315 ms.
May 16 16:41:04.708798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:41:04.725963 systemd[1]: kubelet.service: Deactivated successfully.
May 16 16:41:04.726268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:41:04.726326 systemd[1]: kubelet.service: Consumed 740ms CPU time, 131.1M memory peak.
May 16 16:41:04.728542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:41:04.963664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:41:04.982832 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 16:41:05.024807 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:41:05.024807 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 16:41:05.024807 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:41:05.025330 kubelet[2729]: I0516 16:41:05.024865 2729 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 16:41:05.031284 kubelet[2729]: I0516 16:41:05.031259 2729 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 16:41:05.031342 kubelet[2729]: I0516 16:41:05.031333 2729 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 16:41:05.031713 kubelet[2729]: I0516 16:41:05.031674 2729 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 16:41:05.032892 kubelet[2729]: I0516 16:41:05.032865 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 16 16:41:05.034617 kubelet[2729]: I0516 16:41:05.034574 2729 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 16:41:05.039093 kubelet[2729]: I0516 16:41:05.039067 2729 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 16:41:05.043559 kubelet[2729]: I0516 16:41:05.043523 2729 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 16:41:05.043638 kubelet[2729]: I0516 16:41:05.043622 2729 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 16:41:05.043772 kubelet[2729]: I0516 16:41:05.043734 2729 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 16:41:05.043927 kubelet[2729]: I0516 16:41:05.043762 2729 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 16:41:05.044004 kubelet[2729]: I0516 16:41:05.043928 2729 topology_manager.go:138] "Creating topology manager with none policy"
May 16 16:41:05.044004 kubelet[2729]: I0516 16:41:05.043937 2729 container_manager_linux.go:300] "Creating device plugin manager"
May 16 16:41:05.044004 kubelet[2729]: I0516 16:41:05.043960 2729 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:41:05.044076 kubelet[2729]: I0516 16:41:05.044053 2729 kubelet.go:408] "Attempting to sync node with API server"
May 16 16:41:05.044076 kubelet[2729]: I0516 16:41:05.044064 2729 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 16:41:05.044118 kubelet[2729]: I0516 16:41:05.044094 2729 kubelet.go:314] "Adding apiserver pod source"
May 16 16:41:05.044118 kubelet[2729]: I0516 16:41:05.044103 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 16:41:05.044826 kubelet[2729]: I0516 16:41:05.044795 2729 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 16:41:05.045492 kubelet[2729]: I0516 16:41:05.045467 2729 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 16:41:05.046418 kubelet[2729]: I0516 16:41:05.046394 2729 server.go:1274] "Started kubelet"
May 16 16:41:05.047597 kubelet[2729]: I0516 16:41:05.047432 2729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 16:41:05.047900 kubelet[2729]: I0516 16:41:05.047822 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 16:41:05.048259 kubelet[2729]: I0516 16:41:05.048247 2729 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 16:41:05.051972 kubelet[2729]: I0516 16:41:05.051945 2729 server.go:449] "Adding debug handlers to kubelet server"
May 16 16:41:05.053217 kubelet[2729]: I0516 16:41:05.053203 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 16:41:05.053573 kubelet[2729]: I0516 16:41:05.053547 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 16:41:05.053737 kubelet[2729]: I0516 16:41:05.053713 2729 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 16:41:05.053846 kubelet[2729]: I0516 16:41:05.053826 2729 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 16:41:05.053978 kubelet[2729]: I0516 16:41:05.053968 2729 reconciler.go:26] "Reconciler: start to sync state"
May 16 16:41:05.054131 kubelet[2729]: E0516 16:41:05.054084 2729 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:41:05.054896 kubelet[2729]: E0516 16:41:05.054869 2729 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 16:41:05.058997 kubelet[2729]: I0516 16:41:05.058961 2729 factory.go:221] Registration of the containerd container factory successfully
May 16 16:41:05.058997 kubelet[2729]: I0516 16:41:05.058981 2729 factory.go:221] Registration of the systemd container factory successfully
May 16 16:41:05.059262 kubelet[2729]: I0516 16:41:05.059077 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 16:41:05.066663 kubelet[2729]: I0516 16:41:05.066604 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 16:41:05.068289 kubelet[2729]: I0516 16:41:05.068102 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 16:41:05.068289 kubelet[2729]: I0516 16:41:05.068136 2729 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 16:41:05.068289 kubelet[2729]: I0516 16:41:05.068159 2729 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 16:41:05.068289 kubelet[2729]: E0516 16:41:05.068215 2729 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 16:41:05.099704 kubelet[2729]: I0516 16:41:05.099672 2729 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 16:41:05.099704 kubelet[2729]: I0516 16:41:05.099691 2729 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 16:41:05.099704 kubelet[2729]: I0516 16:41:05.099719 2729 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:41:05.099982 kubelet[2729]: I0516 16:41:05.099912 2729 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 16:41:05.099982 kubelet[2729]: I0516 16:41:05.099926 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 16:41:05.099982 kubelet[2729]: I0516 16:41:05.099945 2729 policy_none.go:49] "None policy: Start" May 16 16:41:05.100721 kubelet[2729]: I0516 16:41:05.100698 2729 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:41:05.100772 kubelet[2729]: I0516 16:41:05.100726 2729 state_mem.go:35] "Initializing new in-memory state store" May 16 16:41:05.100924 kubelet[2729]: I0516 16:41:05.100906 2729 state_mem.go:75] "Updated machine memory state" May 16 16:41:05.106891 kubelet[2729]: I0516 16:41:05.106635 2729 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:41:05.106891 kubelet[2729]: I0516 16:41:05.106810 2729 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:41:05.106891 kubelet[2729]: I0516 16:41:05.106822 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:41:05.107084 kubelet[2729]: I0516 16:41:05.107067 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:41:05.214064 kubelet[2729]: I0516 16:41:05.213930 2729 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:41:05.248678 kubelet[2729]: I0516 16:41:05.248634 2729 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 16:41:05.248825 kubelet[2729]: I0516 16:41:05.248730 2729 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 16:41:05.355828 kubelet[2729]: I0516 16:41:05.355764 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost" May 16 16:41:05.355828 kubelet[2729]: I0516 16:41:05.355814 2729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:41:05.355828 kubelet[2729]: I0516 16:41:05.355838 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:41:05.356076 kubelet[2729]: I0516 16:41:05.355856 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:41:05.356076 kubelet[2729]: I0516 16:41:05.355882 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 16:41:05.356076 kubelet[2729]: I0516 16:41:05.355900 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost" May 16 16:41:05.356076 kubelet[2729]: I0516 16:41:05.355919 2729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/281d56150c36a126db1af01d90a85f2f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"281d56150c36a126db1af01d90a85f2f\") " pod="kube-system/kube-apiserver-localhost" May 16 16:41:05.356076 kubelet[2729]: I0516 16:41:05.355942 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:41:05.356190 kubelet[2729]: I0516 16:41:05.355961 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:41:05.358191 sudo[2766]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 16:41:05.358562 sudo[2766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 16:41:05.483567 kubelet[2729]: E0516 16:41:05.483456 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:05.485722 kubelet[2729]: E0516 16:41:05.485601 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:05.485877 kubelet[2729]: E0516 16:41:05.485838 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:05.819433 sudo[2766]: pam_unix(sudo:session): session closed for user root May 16 16:41:06.045052 kubelet[2729]: I0516 16:41:06.045015 2729 apiserver.go:52] "Watching apiserver" May 16 16:41:06.054329 kubelet[2729]: I0516 16:41:06.054291 2729 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 16:41:06.083726 kubelet[2729]: E0516 16:41:06.083442 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:06.084286 kubelet[2729]: E0516 16:41:06.084252 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:06.174720 kubelet[2729]: E0516 16:41:06.174630 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:41:06.174837 kubelet[2729]: E0516 16:41:06.174804 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:06.242784 kubelet[2729]: I0516 16:41:06.242711 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.24269179 podStartE2EDuration="1.24269179s" podCreationTimestamp="2025-05-16 16:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:06.175671628 +0000 UTC m=+1.188651151" watchObservedRunningTime="2025-05-16 16:41:06.24269179 +0000 UTC m=+1.255671303" May 16 16:41:06.242975 kubelet[2729]: I0516 16:41:06.242835 2729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.242829706 podStartE2EDuration="1.242829706s" podCreationTimestamp="2025-05-16 16:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:06.242523987 +0000 UTC m=+1.255503510" watchObservedRunningTime="2025-05-16 16:41:06.242829706 +0000 UTC m=+1.255809229" May 16 16:41:06.320578 kubelet[2729]: I0516 16:41:06.320347 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.320324215 podStartE2EDuration="1.320324215s" podCreationTimestamp="2025-05-16 16:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:06.3203167 +0000 UTC m=+1.333296223" watchObservedRunningTime="2025-05-16 16:41:06.320324215 +0000 UTC m=+1.333303738" May 16 16:41:07.085083 kubelet[2729]: E0516 16:41:07.085042 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:07.177273 sudo[1807]: pam_unix(sudo:session): session closed for user root May 16 16:41:07.178631 sshd[1806]: Connection closed by 10.0.0.1 port 47424 May 16 16:41:07.178996 sshd-session[1804]: pam_unix(sshd:session): session closed for user core May 16 16:41:07.183564 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:47424.service: Deactivated successfully. May 16 16:41:07.185783 systemd[1]: session-7.scope: Deactivated successfully. May 16 16:41:07.186011 systemd[1]: session-7.scope: Consumed 4.396s CPU time, 264.8M memory peak. May 16 16:41:07.187273 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. May 16 16:41:07.188740 systemd-logind[1572]: Removed session 7. 
May 16 16:41:09.048580 kubelet[2729]: E0516 16:41:09.048555 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:10.159908 kubelet[2729]: I0516 16:41:10.159865 2729 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:41:10.160303 kubelet[2729]: I0516 16:41:10.160283 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:41:10.160330 containerd[1594]: time="2025-05-16T16:41:10.160148175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:41:11.123490 systemd[1]: Created slice kubepods-besteffort-podd2e633f8_82d8_4985_84f7_ecb29b0e8c90.slice - libcontainer container kubepods-besteffort-podd2e633f8_82d8_4985_84f7_ecb29b0e8c90.slice. May 16 16:41:11.193010 kubelet[2729]: I0516 16:41:11.192978 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh45x\" (UniqueName: \"kubernetes.io/projected/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-api-access-kh45x\") pod \"kube-proxy-cn2s2\" (UID: \"d2e633f8-82d8-4985-84f7-ecb29b0e8c90\") " pod="kube-system/kube-proxy-cn2s2" May 16 16:41:11.193392 kubelet[2729]: I0516 16:41:11.193013 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-proxy\") pod \"kube-proxy-cn2s2\" (UID: \"d2e633f8-82d8-4985-84f7-ecb29b0e8c90\") " pod="kube-system/kube-proxy-cn2s2" May 16 16:41:11.193392 kubelet[2729]: I0516 16:41:11.193032 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-xtables-lock\") pod 
\"kube-proxy-cn2s2\" (UID: \"d2e633f8-82d8-4985-84f7-ecb29b0e8c90\") " pod="kube-system/kube-proxy-cn2s2" May 16 16:41:11.193392 kubelet[2729]: I0516 16:41:11.193044 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-lib-modules\") pod \"kube-proxy-cn2s2\" (UID: \"d2e633f8-82d8-4985-84f7-ecb29b0e8c90\") " pod="kube-system/kube-proxy-cn2s2" May 16 16:41:11.306842 systemd[1]: Created slice kubepods-burstable-poddcce061b_d8de_4286_998a_b00bc4f7fefd.slice - libcontainer container kubepods-burstable-poddcce061b_d8de_4286_998a_b00bc4f7fefd.slice. May 16 16:41:11.336306 kubelet[2729]: E0516 16:41:11.336271 2729 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 16 16:41:11.336306 kubelet[2729]: E0516 16:41:11.336305 2729 projected.go:194] Error preparing data for projected volume kube-api-access-kh45x for pod kube-system/kube-proxy-cn2s2: configmap "kube-root-ca.crt" not found May 16 16:41:11.336488 kubelet[2729]: E0516 16:41:11.336381 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-api-access-kh45x podName:d2e633f8-82d8-4985-84f7-ecb29b0e8c90 nodeName:}" failed. No retries permitted until 2025-05-16 16:41:11.836346353 +0000 UTC m=+6.849325876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kh45x" (UniqueName: "kubernetes.io/projected/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-api-access-kh45x") pod "kube-proxy-cn2s2" (UID: "d2e633f8-82d8-4985-84f7-ecb29b0e8c90") : configmap "kube-root-ca.crt" not found May 16 16:41:11.395493 kubelet[2729]: I0516 16:41:11.395341 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-xtables-lock\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395493 kubelet[2729]: I0516 16:41:11.395400 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-bpf-maps\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395493 kubelet[2729]: I0516 16:41:11.395419 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcce061b-d8de-4286-998a-b00bc4f7fefd-clustermesh-secrets\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395493 kubelet[2729]: I0516 16:41:11.395437 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-kernel\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395504 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-hubble-tls\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395547 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtff\" (UniqueName: \"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-kube-api-access-ldtff\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395582 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cni-path\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395604 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-etc-cni-netd\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395622 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-cgroup\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395704 kubelet[2729]: I0516 16:41:11.395639 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-run\") pod \"cilium-gd9p2\" (UID: 
\"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395853 kubelet[2729]: I0516 16:41:11.395654 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-config-path\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395853 kubelet[2729]: I0516 16:41:11.395670 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-net\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395853 kubelet[2729]: I0516 16:41:11.395685 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-lib-modules\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.395853 kubelet[2729]: I0516 16:41:11.395717 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-hostproc\") pod \"cilium-gd9p2\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " pod="kube-system/cilium-gd9p2" May 16 16:41:11.900581 kubelet[2729]: E0516 16:41:11.900536 2729 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 16 16:41:11.900581 kubelet[2729]: E0516 16:41:11.900562 2729 projected.go:194] Error preparing data for projected volume kube-api-access-kh45x for pod kube-system/kube-proxy-cn2s2: configmap "kube-root-ca.crt" not found May 16 16:41:11.900763 
kubelet[2729]: E0516 16:41:11.900603 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-api-access-kh45x podName:d2e633f8-82d8-4985-84f7-ecb29b0e8c90 nodeName:}" failed. No retries permitted until 2025-05-16 16:41:12.900588715 +0000 UTC m=+7.913568238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kh45x" (UniqueName: "kubernetes.io/projected/d2e633f8-82d8-4985-84f7-ecb29b0e8c90-kube-api-access-kh45x") pod "kube-proxy-cn2s2" (UID: "d2e633f8-82d8-4985-84f7-ecb29b0e8c90") : configmap "kube-root-ca.crt" not found May 16 16:41:12.086950 systemd[1]: Created slice kubepods-besteffort-podaf50d0bd_ef28_4385_9f84_f0924ae94701.slice - libcontainer container kubepods-besteffort-podaf50d0bd_ef28_4385_9f84_f0924ae94701.slice. May 16 16:41:12.103659 kubelet[2729]: I0516 16:41:12.102363 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af50d0bd-ef28-4385-9f84-f0924ae94701-cilium-config-path\") pod \"cilium-operator-5d85765b45-v4nr4\" (UID: \"af50d0bd-ef28-4385-9f84-f0924ae94701\") " pod="kube-system/cilium-operator-5d85765b45-v4nr4" May 16 16:41:12.103861 kubelet[2729]: I0516 16:41:12.103815 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkbc\" (UniqueName: \"kubernetes.io/projected/af50d0bd-ef28-4385-9f84-f0924ae94701-kube-api-access-fzkbc\") pod \"cilium-operator-5d85765b45-v4nr4\" (UID: \"af50d0bd-ef28-4385-9f84-f0924ae94701\") " pod="kube-system/cilium-operator-5d85765b45-v4nr4" May 16 16:41:12.211251 kubelet[2729]: E0516 16:41:12.211221 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.211974 containerd[1594]: 
time="2025-05-16T16:41:12.211934083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd9p2,Uid:dcce061b-d8de-4286-998a-b00bc4f7fefd,Namespace:kube-system,Attempt:0,}" May 16 16:41:12.395498 kubelet[2729]: E0516 16:41:12.395352 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.396417 containerd[1594]: time="2025-05-16T16:41:12.396194334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v4nr4,Uid:af50d0bd-ef28-4385-9f84-f0924ae94701,Namespace:kube-system,Attempt:0,}" May 16 16:41:12.403288 containerd[1594]: time="2025-05-16T16:41:12.403239345Z" level=info msg="connecting to shim 73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" namespace=k8s.io protocol=ttrpc version=3 May 16 16:41:12.420750 containerd[1594]: time="2025-05-16T16:41:12.420697830Z" level=info msg="connecting to shim 927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4" address="unix:///run/containerd/s/f3709c4c9819a793d8a72e5f329144e060ed256647d8bd8e40739c433c85027a" namespace=k8s.io protocol=ttrpc version=3 May 16 16:41:12.433620 systemd[1]: Started cri-containerd-73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93.scope - libcontainer container 73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93. May 16 16:41:12.455666 systemd[1]: Started cri-containerd-927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4.scope - libcontainer container 927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4. 
May 16 16:41:12.479964 containerd[1594]: time="2025-05-16T16:41:12.479863379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd9p2,Uid:dcce061b-d8de-4286-998a-b00bc4f7fefd,Namespace:kube-system,Attempt:0,} returns sandbox id \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\"" May 16 16:41:12.480643 kubelet[2729]: E0516 16:41:12.480622 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.481879 containerd[1594]: time="2025-05-16T16:41:12.481858829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 16:41:12.507213 containerd[1594]: time="2025-05-16T16:41:12.507168845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v4nr4,Uid:af50d0bd-ef28-4385-9f84-f0924ae94701,Namespace:kube-system,Attempt:0,} returns sandbox id \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\"" May 16 16:41:12.507972 kubelet[2729]: E0516 16:41:12.507939 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.679895 kubelet[2729]: E0516 16:41:12.679860 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.934747 kubelet[2729]: E0516 16:41:12.934645 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:12.935087 containerd[1594]: time="2025-05-16T16:41:12.935033136Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-cn2s2,Uid:d2e633f8-82d8-4985-84f7-ecb29b0e8c90,Namespace:kube-system,Attempt:0,}" May 16 16:41:12.958805 containerd[1594]: time="2025-05-16T16:41:12.958760861Z" level=info msg="connecting to shim 0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c" address="unix:///run/containerd/s/07b9c7c01b782c780991c67993d7dd484b9534dbef82ec15916d3ce8c0b9405d" namespace=k8s.io protocol=ttrpc version=3 May 16 16:41:12.989618 systemd[1]: Started cri-containerd-0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c.scope - libcontainer container 0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c. May 16 16:41:13.017263 containerd[1594]: time="2025-05-16T16:41:13.017209305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cn2s2,Uid:d2e633f8-82d8-4985-84f7-ecb29b0e8c90,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c\"" May 16 16:41:13.018026 kubelet[2729]: E0516 16:41:13.018002 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:13.020034 containerd[1594]: time="2025-05-16T16:41:13.020001191Z" level=info msg="CreateContainer within sandbox \"0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:41:13.031123 containerd[1594]: time="2025-05-16T16:41:13.031072557Z" level=info msg="Container 2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:13.039226 containerd[1594]: time="2025-05-16T16:41:13.039200008Z" level=info msg="CreateContainer within sandbox \"0a2d943c7ed1e6e203367df536ccbecc1d5e8816a35c2a9d7a1126091d25647c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d\"" May 16 16:41:13.040067 containerd[1594]: time="2025-05-16T16:41:13.039772430Z" level=info msg="StartContainer for \"2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d\"" May 16 16:41:13.041129 containerd[1594]: time="2025-05-16T16:41:13.041054324Z" level=info msg="connecting to shim 2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d" address="unix:///run/containerd/s/07b9c7c01b782c780991c67993d7dd484b9534dbef82ec15916d3ce8c0b9405d" protocol=ttrpc version=3 May 16 16:41:13.064526 systemd[1]: Started cri-containerd-2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d.scope - libcontainer container 2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d. May 16 16:41:13.105101 kubelet[2729]: E0516 16:41:13.105069 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:13.116810 containerd[1594]: time="2025-05-16T16:41:13.116711864Z" level=info msg="StartContainer for \"2d78dffeaeb6742c27de6cb3632d6dcaf5f2cca8ed5e6cc6ffa285d880831a8d\" returns successfully" May 16 16:41:14.109389 kubelet[2729]: E0516 16:41:14.108387 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:14.119069 kubelet[2729]: I0516 16:41:14.119021 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cn2s2" podStartSLOduration=4.119005101 podStartE2EDuration="4.119005101s" podCreationTimestamp="2025-05-16 16:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:14.118575262 +0000 UTC m=+9.131554815" watchObservedRunningTime="2025-05-16 16:41:14.119005101 +0000 UTC 
m=+9.131984624" May 16 16:41:14.464844 kubelet[2729]: E0516 16:41:14.464803 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:15.109868 kubelet[2729]: E0516 16:41:15.109455 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:15.109868 kubelet[2729]: E0516 16:41:15.109702 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:19.053018 kubelet[2729]: E0516 16:41:19.052986 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:19.265518 update_engine[1579]: I20250516 16:41:19.265441 1579 update_attempter.cc:509] Updating boot flags... May 16 16:41:22.382945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365370237.mount: Deactivated successfully. 
May 16 16:41:27.780096 containerd[1594]: time="2025-05-16T16:41:27.780020626Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:41:27.787131 containerd[1594]: time="2025-05-16T16:41:27.787075148Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 16:41:27.803404 containerd[1594]: time="2025-05-16T16:41:27.803290751Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:41:27.804569 containerd[1594]: time="2025-05-16T16:41:27.804535191Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.322588836s" May 16 16:41:27.804632 containerd[1594]: time="2025-05-16T16:41:27.804582570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 16:41:27.811988 containerd[1594]: time="2025-05-16T16:41:27.811945255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 16:41:27.822033 containerd[1594]: time="2025-05-16T16:41:27.821991186Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 16:41:27.911886 containerd[1594]: time="2025-05-16T16:41:27.911820479Z" level=info msg="Container 647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:27.916443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463498474.mount: Deactivated successfully. May 16 16:41:28.045242 containerd[1594]: time="2025-05-16T16:41:28.045117584Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\"" May 16 16:41:28.045680 containerd[1594]: time="2025-05-16T16:41:28.045652104Z" level=info msg="StartContainer for \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\"" May 16 16:41:28.046515 containerd[1594]: time="2025-05-16T16:41:28.046482211Z" level=info msg="connecting to shim 647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" protocol=ttrpc version=3 May 16 16:41:28.110509 systemd[1]: Started cri-containerd-647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39.scope - libcontainer container 647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39. May 16 16:41:28.177399 containerd[1594]: time="2025-05-16T16:41:28.177304163Z" level=info msg="StartContainer for \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" returns successfully" May 16 16:41:28.177818 systemd[1]: cri-containerd-647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39.scope: Deactivated successfully. May 16 16:41:28.178166 systemd[1]: cri-containerd-647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39.scope: Consumed 27ms CPU time, 6.9M memory peak, 4K read from disk, 3.2M written to disk. 
May 16 16:41:28.179731 containerd[1594]: time="2025-05-16T16:41:28.179685598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" id:\"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" pid:3169 exited_at:{seconds:1747413688 nanos:179196144}" May 16 16:41:28.179857 containerd[1594]: time="2025-05-16T16:41:28.179773464Z" level=info msg="received exit event container_id:\"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" id:\"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" pid:3169 exited_at:{seconds:1747413688 nanos:179196144}" May 16 16:41:28.200871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39-rootfs.mount: Deactivated successfully. May 16 16:41:29.371780 kubelet[2729]: E0516 16:41:29.371710 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:30.374797 kubelet[2729]: E0516 16:41:30.374762 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:30.376831 containerd[1594]: time="2025-05-16T16:41:30.376789021Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 16:41:30.413363 containerd[1594]: time="2025-05-16T16:41:30.413303157Z" level=info msg="Container 9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:30.421518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918049317.mount: Deactivated successfully. 
May 16 16:41:30.430867 containerd[1594]: time="2025-05-16T16:41:30.430818986Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\"" May 16 16:41:30.431569 containerd[1594]: time="2025-05-16T16:41:30.431522252Z" level=info msg="StartContainer for \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\"" May 16 16:41:30.432345 containerd[1594]: time="2025-05-16T16:41:30.432314306Z" level=info msg="connecting to shim 9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" protocol=ttrpc version=3 May 16 16:41:30.453532 systemd[1]: Started cri-containerd-9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04.scope - libcontainer container 9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04. May 16 16:41:30.487398 containerd[1594]: time="2025-05-16T16:41:30.487338806Z" level=info msg="StartContainer for \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" returns successfully" May 16 16:41:30.497527 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 16:41:30.498389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 16:41:30.498721 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 16:41:30.500783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:41:30.502842 systemd[1]: cri-containerd-9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04.scope: Deactivated successfully. 
May 16 16:41:30.504570 containerd[1594]: time="2025-05-16T16:41:30.504528039Z" level=info msg="received exit event container_id:\"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" id:\"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" pid:3223 exited_at:{seconds:1747413690 nanos:504260475}" May 16 16:41:30.505074 containerd[1594]: time="2025-05-16T16:41:30.505042680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" id:\"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" pid:3223 exited_at:{seconds:1747413690 nanos:504260475}" May 16 16:41:30.527161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 16:41:30.754880 containerd[1594]: time="2025-05-16T16:41:30.754814492Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:41:30.755648 containerd[1594]: time="2025-05-16T16:41:30.755594823Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 16:41:30.756808 containerd[1594]: time="2025-05-16T16:41:30.756765291Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:41:30.757833 containerd[1594]: time="2025-05-16T16:41:30.757774084Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.945791669s" May 16 16:41:30.757833 containerd[1594]: time="2025-05-16T16:41:30.757817175Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 16:41:30.759924 containerd[1594]: time="2025-05-16T16:41:30.759879114Z" level=info msg="CreateContainer within sandbox \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 16:41:30.771637 containerd[1594]: time="2025-05-16T16:41:30.771565027Z" level=info msg="Container 3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:30.780405 containerd[1594]: time="2025-05-16T16:41:30.780317557Z" level=info msg="CreateContainer within sandbox \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\"" May 16 16:41:30.780682 containerd[1594]: time="2025-05-16T16:41:30.780651797Z" level=info msg="StartContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\"" May 16 16:41:30.781898 containerd[1594]: time="2025-05-16T16:41:30.781856279Z" level=info msg="connecting to shim 3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39" address="unix:///run/containerd/s/f3709c4c9819a793d8a72e5f329144e060ed256647d8bd8e40739c433c85027a" protocol=ttrpc version=3 May 16 16:41:30.817627 systemd[1]: Started cri-containerd-3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39.scope - libcontainer container 3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39. 
May 16 16:41:30.902525 containerd[1594]: time="2025-05-16T16:41:30.902467872Z" level=info msg="StartContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" returns successfully" May 16 16:41:31.377967 kubelet[2729]: E0516 16:41:31.377918 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:31.379726 containerd[1594]: time="2025-05-16T16:41:31.379681255Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 16:41:31.381792 kubelet[2729]: E0516 16:41:31.381756 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:31.418019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04-rootfs.mount: Deactivated successfully. 
May 16 16:41:31.458599 containerd[1594]: time="2025-05-16T16:41:31.457330240Z" level=info msg="Container 8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:31.474940 containerd[1594]: time="2025-05-16T16:41:31.474892735Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\"" May 16 16:41:31.475537 containerd[1594]: time="2025-05-16T16:41:31.475420812Z" level=info msg="StartContainer for \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\"" May 16 16:41:31.490275 containerd[1594]: time="2025-05-16T16:41:31.490206561Z" level=info msg="connecting to shim 8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" protocol=ttrpc version=3 May 16 16:41:31.529556 systemd[1]: Started cri-containerd-8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989.scope - libcontainer container 8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989. May 16 16:41:31.603204 systemd[1]: cri-containerd-8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989.scope: Deactivated successfully. 
May 16 16:41:31.604421 containerd[1594]: time="2025-05-16T16:41:31.604352703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" id:\"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" pid:3315 exited_at:{seconds:1747413691 nanos:604089267}" May 16 16:41:31.666431 containerd[1594]: time="2025-05-16T16:41:31.666292657Z" level=info msg="received exit event container_id:\"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" id:\"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" pid:3315 exited_at:{seconds:1747413691 nanos:604089267}" May 16 16:41:31.668213 containerd[1594]: time="2025-05-16T16:41:31.668156852Z" level=info msg="StartContainer for \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" returns successfully" May 16 16:41:31.695320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989-rootfs.mount: Deactivated successfully. 
May 16 16:41:32.386134 kubelet[2729]: E0516 16:41:32.386062 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:32.386717 kubelet[2729]: E0516 16:41:32.386164 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:32.388353 containerd[1594]: time="2025-05-16T16:41:32.388271844Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 16:41:32.399695 containerd[1594]: time="2025-05-16T16:41:32.399649982Z" level=info msg="Container 4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:32.400354 kubelet[2729]: I0516 16:41:32.400093 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v4nr4" podStartSLOduration=2.1499691260000002 podStartE2EDuration="20.400069812s" podCreationTimestamp="2025-05-16 16:41:12 +0000 UTC" firstStartedPulling="2025-05-16 16:41:12.508481321 +0000 UTC m=+7.521460844" lastFinishedPulling="2025-05-16 16:41:30.758582007 +0000 UTC m=+25.771561530" observedRunningTime="2025-05-16 16:41:31.492407371 +0000 UTC m=+26.505386894" watchObservedRunningTime="2025-05-16 16:41:32.400069812 +0000 UTC m=+27.413049335" May 16 16:41:32.407353 containerd[1594]: time="2025-05-16T16:41:32.407282318Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\"" May 16 16:41:32.407910 containerd[1594]: 
time="2025-05-16T16:41:32.407873793Z" level=info msg="StartContainer for \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\"" May 16 16:41:32.408730 containerd[1594]: time="2025-05-16T16:41:32.408673560Z" level=info msg="connecting to shim 4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" protocol=ttrpc version=3 May 16 16:41:32.433613 systemd[1]: Started cri-containerd-4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6.scope - libcontainer container 4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6. May 16 16:41:32.465508 systemd[1]: cri-containerd-4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6.scope: Deactivated successfully. May 16 16:41:32.466937 containerd[1594]: time="2025-05-16T16:41:32.466883083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" id:\"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" pid:3354 exited_at:{seconds:1747413692 nanos:465972327}" May 16 16:41:32.467575 containerd[1594]: time="2025-05-16T16:41:32.467493434Z" level=info msg="received exit event container_id:\"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" id:\"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" pid:3354 exited_at:{seconds:1747413692 nanos:465972327}" May 16 16:41:32.469848 containerd[1594]: time="2025-05-16T16:41:32.469767521Z" level=info msg="StartContainer for \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" returns successfully" May 16 16:41:32.493906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6-rootfs.mount: Deactivated successfully. 
May 16 16:41:33.391118 kubelet[2729]: E0516 16:41:33.391082 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:33.394649 containerd[1594]: time="2025-05-16T16:41:33.394598703Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 16:41:33.681499 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028). May 16 16:41:33.712227 containerd[1594]: time="2025-05-16T16:41:33.711644262Z" level=info msg="Container 18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:33.858790 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:33.860599 sshd-session[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:33.865582 systemd-logind[1572]: New session 8 of user core. May 16 16:41:33.875475 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 16 16:41:33.882498 containerd[1594]: time="2025-05-16T16:41:33.882451640Z" level=info msg="CreateContainer within sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\"" May 16 16:41:33.883197 containerd[1594]: time="2025-05-16T16:41:33.882982460Z" level=info msg="StartContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\"" May 16 16:41:33.883812 containerd[1594]: time="2025-05-16T16:41:33.883780033Z" level=info msg="connecting to shim 18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af" address="unix:///run/containerd/s/affea317d323c205e82dc02bab75f0624ec5af9083211c4cc6408c5416c35785" protocol=ttrpc version=3 May 16 16:41:33.913677 systemd[1]: Started cri-containerd-18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af.scope - libcontainer container 18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af. May 16 16:41:33.956933 containerd[1594]: time="2025-05-16T16:41:33.956904028Z" level=info msg="StartContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" returns successfully" May 16 16:41:34.038726 containerd[1594]: time="2025-05-16T16:41:34.038682450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" id:\"dbac9dacc582d8967073c3fb8e569b14c69679449b575d08adcf14599c4ad676\" pid:3437 exited_at:{seconds:1747413694 nanos:37726139}" May 16 16:41:34.048840 sshd[3382]: Connection closed by 10.0.0.1 port 41028 May 16 16:41:34.049502 sshd-session[3380]: pam_unix(sshd:session): session closed for user core May 16 16:41:34.053578 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:41028.service: Deactivated successfully. May 16 16:41:34.056438 systemd[1]: session-8.scope: Deactivated successfully. 
May 16 16:41:34.059157 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. May 16 16:41:34.061686 systemd-logind[1572]: Removed session 8. May 16 16:41:34.067414 kubelet[2729]: I0516 16:41:34.067350 2729 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 16:41:34.111549 systemd[1]: Created slice kubepods-burstable-podc0f1c26b_e4eb_4667_a2cb_6d3b785f6896.slice - libcontainer container kubepods-burstable-podc0f1c26b_e4eb_4667_a2cb_6d3b785f6896.slice. May 16 16:41:34.145067 systemd[1]: Created slice kubepods-burstable-pod649a12b2_bd93_48f2_a57d_37ac009f4532.slice - libcontainer container kubepods-burstable-pod649a12b2_bd93_48f2_a57d_37ac009f4532.slice. May 16 16:41:34.251751 kubelet[2729]: I0516 16:41:34.251550 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/649a12b2-bd93-48f2-a57d-37ac009f4532-config-volume\") pod \"coredns-7c65d6cfc9-xfnmq\" (UID: \"649a12b2-bd93-48f2-a57d-37ac009f4532\") " pod="kube-system/coredns-7c65d6cfc9-xfnmq" May 16 16:41:34.251751 kubelet[2729]: I0516 16:41:34.251608 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0f1c26b-e4eb-4667-a2cb-6d3b785f6896-config-volume\") pod \"coredns-7c65d6cfc9-d6fnw\" (UID: \"c0f1c26b-e4eb-4667-a2cb-6d3b785f6896\") " pod="kube-system/coredns-7c65d6cfc9-d6fnw" May 16 16:41:34.251751 kubelet[2729]: I0516 16:41:34.251630 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmkh\" (UniqueName: \"kubernetes.io/projected/649a12b2-bd93-48f2-a57d-37ac009f4532-kube-api-access-xzmkh\") pod \"coredns-7c65d6cfc9-xfnmq\" (UID: \"649a12b2-bd93-48f2-a57d-37ac009f4532\") " pod="kube-system/coredns-7c65d6cfc9-xfnmq" May 16 16:41:34.251751 kubelet[2729]: I0516 16:41:34.251645 2729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdtg8\" (UniqueName: \"kubernetes.io/projected/c0f1c26b-e4eb-4667-a2cb-6d3b785f6896-kube-api-access-rdtg8\") pod \"coredns-7c65d6cfc9-d6fnw\" (UID: \"c0f1c26b-e4eb-4667-a2cb-6d3b785f6896\") " pod="kube-system/coredns-7c65d6cfc9-d6fnw" May 16 16:41:34.397728 kubelet[2729]: E0516 16:41:34.397700 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:34.417567 kubelet[2729]: E0516 16:41:34.417532 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:34.418863 containerd[1594]: time="2025-05-16T16:41:34.418828308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fnw,Uid:c0f1c26b-e4eb-4667-a2cb-6d3b785f6896,Namespace:kube-system,Attempt:0,}" May 16 16:41:34.448629 kubelet[2729]: E0516 16:41:34.448583 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:34.449224 containerd[1594]: time="2025-05-16T16:41:34.449108682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfnmq,Uid:649a12b2-bd93-48f2-a57d-37ac009f4532,Namespace:kube-system,Attempt:0,}" May 16 16:41:35.399058 kubelet[2729]: E0516 16:41:35.399011 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:36.171633 systemd-networkd[1497]: cilium_host: Link UP May 16 16:41:36.172080 systemd-networkd[1497]: cilium_net: Link UP May 16 16:41:36.172344 systemd-networkd[1497]: cilium_net: Gained carrier May 16 
16:41:36.172633 systemd-networkd[1497]: cilium_host: Gained carrier May 16 16:41:36.267703 systemd-networkd[1497]: cilium_vxlan: Link UP May 16 16:41:36.267714 systemd-networkd[1497]: cilium_vxlan: Gained carrier May 16 16:41:36.400459 kubelet[2729]: E0516 16:41:36.400426 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:36.472410 kernel: NET: Registered PF_ALG protocol family May 16 16:41:36.513551 systemd-networkd[1497]: cilium_net: Gained IPv6LL May 16 16:41:36.649517 systemd-networkd[1497]: cilium_host: Gained IPv6LL May 16 16:41:37.104000 systemd-networkd[1497]: lxc_health: Link UP May 16 16:41:37.116205 systemd-networkd[1497]: lxc_health: Gained carrier May 16 16:41:37.455459 systemd-networkd[1497]: lxcf5d07aea26b9: Link UP May 16 16:41:37.457398 kernel: eth0: renamed from tmp7f234 May 16 16:41:37.458028 systemd-networkd[1497]: lxcf5d07aea26b9: Gained carrier May 16 16:41:37.491360 systemd-networkd[1497]: lxcb273ace77b6f: Link UP May 16 16:41:37.501403 kernel: eth0: renamed from tmpa0a09 May 16 16:41:37.505344 systemd-networkd[1497]: lxcb273ace77b6f: Gained carrier May 16 16:41:37.906014 systemd-networkd[1497]: cilium_vxlan: Gained IPv6LL May 16 16:41:38.161628 systemd-networkd[1497]: lxc_health: Gained IPv6LL May 16 16:41:38.212764 kubelet[2729]: E0516 16:41:38.212731 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:38.231161 kubelet[2729]: I0516 16:41:38.231081 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gd9p2" podStartSLOduration=12.900727973 podStartE2EDuration="28.231065051s" podCreationTimestamp="2025-05-16 16:41:10 +0000 UTC" firstStartedPulling="2025-05-16 16:41:12.481420471 +0000 UTC m=+7.494399995" 
lastFinishedPulling="2025-05-16 16:41:27.81175752 +0000 UTC m=+22.824737073" observedRunningTime="2025-05-16 16:41:34.416963014 +0000 UTC m=+29.429942557" watchObservedRunningTime="2025-05-16 16:41:38.231065051 +0000 UTC m=+33.244044574" May 16 16:41:38.403820 kubelet[2729]: E0516 16:41:38.403787 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:38.801591 systemd-networkd[1497]: lxcb273ace77b6f: Gained IPv6LL May 16 16:41:39.065256 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). May 16 16:41:39.116213 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:39.117990 sshd-session[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:39.121490 systemd-networkd[1497]: lxcf5d07aea26b9: Gained IPv6LL May 16 16:41:39.125427 systemd-logind[1572]: New session 9 of user core. May 16 16:41:39.135521 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 16:41:39.276173 sshd[3916]: Connection closed by 10.0.0.1 port 41040 May 16 16:41:39.276680 sshd-session[3914]: pam_unix(sshd:session): session closed for user core May 16 16:41:39.281051 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:41040.service: Deactivated successfully. May 16 16:41:39.283512 systemd[1]: session-9.scope: Deactivated successfully. May 16 16:41:39.284228 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. May 16 16:41:39.285428 systemd-logind[1572]: Removed session 9. 
May 16 16:41:39.405956 kubelet[2729]: E0516 16:41:39.405843 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:41.729401 containerd[1594]: time="2025-05-16T16:41:41.729328472Z" level=info msg="connecting to shim a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e" address="unix:///run/containerd/s/380d9211a0338e848c3130e2797de33e7078e5ce7e4a42a54cd369462a15f6c3" namespace=k8s.io protocol=ttrpc version=3 May 16 16:41:41.731707 containerd[1594]: time="2025-05-16T16:41:41.731673884Z" level=info msg="connecting to shim 7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae" address="unix:///run/containerd/s/8afe28bef13366d7086c767641bd2c8635193a180487ea32bb47ad238a45abd6" namespace=k8s.io protocol=ttrpc version=3 May 16 16:41:41.761518 systemd[1]: Started cri-containerd-7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae.scope - libcontainer container 7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae. May 16 16:41:41.763278 systemd[1]: Started cri-containerd-a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e.scope - libcontainer container a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e. 
May 16 16:41:41.781140 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:41:41.781407 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:41:41.819757 containerd[1594]: time="2025-05-16T16:41:41.819686937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fnw,Uid:c0f1c26b-e4eb-4667-a2cb-6d3b785f6896,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae\"" May 16 16:41:41.820491 kubelet[2729]: E0516 16:41:41.820465 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:41.824028 containerd[1594]: time="2025-05-16T16:41:41.823927653Z" level=info msg="CreateContainer within sandbox \"7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:41:41.830937 containerd[1594]: time="2025-05-16T16:41:41.830889638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfnmq,Uid:649a12b2-bd93-48f2-a57d-37ac009f4532,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e\"" May 16 16:41:41.831806 kubelet[2729]: E0516 16:41:41.831782 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:41.834348 containerd[1594]: time="2025-05-16T16:41:41.834314660Z" level=info msg="CreateContainer within sandbox \"a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:41:41.838017 containerd[1594]: 
time="2025-05-16T16:41:41.837980735Z" level=info msg="Container 9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:41.846607 containerd[1594]: time="2025-05-16T16:41:41.846557448Z" level=info msg="Container db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7: CDI devices from CRI Config.CDIDevices: []" May 16 16:41:41.849234 containerd[1594]: time="2025-05-16T16:41:41.849184548Z" level=info msg="CreateContainer within sandbox \"7f234592b1dfea1c05c307d0444e7b6adde893b0ef5b1bfb7d26d6aacdcf4cae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2\"" May 16 16:41:41.849650 containerd[1594]: time="2025-05-16T16:41:41.849626390Z" level=info msg="StartContainer for \"9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2\"" May 16 16:41:41.850404 containerd[1594]: time="2025-05-16T16:41:41.850350682Z" level=info msg="connecting to shim 9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2" address="unix:///run/containerd/s/8afe28bef13366d7086c767641bd2c8635193a180487ea32bb47ad238a45abd6" protocol=ttrpc version=3 May 16 16:41:41.853704 containerd[1594]: time="2025-05-16T16:41:41.853669294Z" level=info msg="CreateContainer within sandbox \"a0a09b31a010f6e3a87a2250a56bb5f982ffcb253d487d2f6cd474a29a06078e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7\"" May 16 16:41:41.854581 containerd[1594]: time="2025-05-16T16:41:41.854485610Z" level=info msg="StartContainer for \"db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7\"" May 16 16:41:41.855721 containerd[1594]: time="2025-05-16T16:41:41.855692279Z" level=info msg="connecting to shim db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7" 
address="unix:///run/containerd/s/380d9211a0338e848c3130e2797de33e7078e5ce7e4a42a54cd369462a15f6c3" protocol=ttrpc version=3 May 16 16:41:41.875515 systemd[1]: Started cri-containerd-9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2.scope - libcontainer container 9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2. May 16 16:41:41.878871 systemd[1]: Started cri-containerd-db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7.scope - libcontainer container db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7. May 16 16:41:41.914965 containerd[1594]: time="2025-05-16T16:41:41.914926452Z" level=info msg="StartContainer for \"9d1edca5a2a556d3464d3be731cf6fc092feea929a4f53626aa0b427ca455dc2\" returns successfully" May 16 16:41:41.915225 containerd[1594]: time="2025-05-16T16:41:41.915189587Z" level=info msg="StartContainer for \"db4946f45426ba45c51db1540305b86aa7b037816a5d641c215b1f7582672ff7\" returns successfully" May 16 16:41:42.412432 kubelet[2729]: E0516 16:41:42.412202 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:42.415280 kubelet[2729]: E0516 16:41:42.415242 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:42.553256 kubelet[2729]: I0516 16:41:42.552972 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xfnmq" podStartSLOduration=30.552957372 podStartE2EDuration="30.552957372s" podCreationTimestamp="2025-05-16 16:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:42.552103446 +0000 UTC m=+37.565082969" watchObservedRunningTime="2025-05-16 16:41:42.552957372 +0000 UTC 
m=+37.565936895" May 16 16:41:43.024456 kubelet[2729]: I0516 16:41:43.024384 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d6fnw" podStartSLOduration=31.024348498 podStartE2EDuration="31.024348498s" podCreationTimestamp="2025-05-16 16:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:41:43.024042303 +0000 UTC m=+38.037021826" watchObservedRunningTime="2025-05-16 16:41:43.024348498 +0000 UTC m=+38.037328021" May 16 16:41:43.417537 kubelet[2729]: E0516 16:41:43.417399 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:43.417874 kubelet[2729]: E0516 16:41:43.417796 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:44.292195 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:45090.service - OpenSSH per-connection server daemon (10.0.0.1:45090). May 16 16:41:44.349105 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:44.351044 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:44.355724 systemd-logind[1572]: New session 10 of user core. May 16 16:41:44.363492 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 16 16:41:44.419535 kubelet[2729]: E0516 16:41:44.419495 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:44.420172 kubelet[2729]: E0516 16:41:44.419734 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:41:44.485326 sshd[4114]: Connection closed by 10.0.0.1 port 45090 May 16 16:41:44.485628 sshd-session[4112]: pam_unix(sshd:session): session closed for user core May 16 16:41:44.490152 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:45090.service: Deactivated successfully. May 16 16:41:44.492083 systemd[1]: session-10.scope: Deactivated successfully. May 16 16:41:44.492846 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. May 16 16:41:44.493955 systemd-logind[1572]: Removed session 10. May 16 16:41:49.502419 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). May 16 16:41:49.539671 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:49.541057 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:49.545334 systemd-logind[1572]: New session 11 of user core. May 16 16:41:49.559522 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 16:41:49.665155 sshd[4131]: Connection closed by 10.0.0.1 port 45106 May 16 16:41:49.665499 sshd-session[4129]: pam_unix(sshd:session): session closed for user core May 16 16:41:49.669212 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:45106.service: Deactivated successfully. May 16 16:41:49.671338 systemd[1]: session-11.scope: Deactivated successfully. May 16 16:41:49.673870 systemd-logind[1572]: Session 11 logged out. 
Waiting for processes to exit. May 16 16:41:49.674889 systemd-logind[1572]: Removed session 11. May 16 16:41:54.687494 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:43620.service - OpenSSH per-connection server daemon (10.0.0.1:43620). May 16 16:41:54.743768 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 43620 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:54.745106 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:54.749356 systemd-logind[1572]: New session 12 of user core. May 16 16:41:54.758494 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:41:54.862655 sshd[4147]: Connection closed by 10.0.0.1 port 43620 May 16 16:41:54.862966 sshd-session[4145]: pam_unix(sshd:session): session closed for user core May 16 16:41:54.874175 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:43620.service: Deactivated successfully. May 16 16:41:54.876178 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:41:54.876920 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. May 16 16:41:54.880136 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:43636.service - OpenSSH per-connection server daemon (10.0.0.1:43636). May 16 16:41:54.880773 systemd-logind[1572]: Removed session 12. May 16 16:41:54.934815 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:54.936543 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:54.941385 systemd-logind[1572]: New session 13 of user core. May 16 16:41:54.957501 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 16 16:41:55.101927 sshd[4163]: Connection closed by 10.0.0.1 port 43636 May 16 16:41:55.102335 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 16 16:41:55.114319 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:43636.service: Deactivated successfully. May 16 16:41:55.119414 systemd[1]: session-13.scope: Deactivated successfully. May 16 16:41:55.121732 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. May 16 16:41:55.125325 systemd-logind[1572]: Removed session 13. May 16 16:41:55.128599 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:43642.service - OpenSSH per-connection server daemon (10.0.0.1:43642). May 16 16:41:55.189640 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 43642 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:41:55.191152 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:41:55.195448 systemd-logind[1572]: New session 14 of user core. May 16 16:41:55.205512 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:41:55.340014 sshd[4176]: Connection closed by 10.0.0.1 port 43642 May 16 16:41:55.340331 sshd-session[4174]: pam_unix(sshd:session): session closed for user core May 16 16:41:55.344471 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:43642.service: Deactivated successfully. May 16 16:41:55.346470 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:41:55.347198 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. May 16 16:41:55.348517 systemd-logind[1572]: Removed session 14. May 16 16:42:00.352026 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:43654.service - OpenSSH per-connection server daemon (10.0.0.1:43654). 
May 16 16:42:00.406571 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 43654 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:00.407826 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:00.411775 systemd-logind[1572]: New session 15 of user core. May 16 16:42:00.422509 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:42:00.527330 sshd[4195]: Connection closed by 10.0.0.1 port 43654 May 16 16:42:00.527678 sshd-session[4193]: pam_unix(sshd:session): session closed for user core May 16 16:42:00.531002 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:43654.service: Deactivated successfully. May 16 16:42:00.533069 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:42:00.534881 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. May 16 16:42:00.536086 systemd-logind[1572]: Removed session 15. May 16 16:42:05.545683 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242). May 16 16:42:05.600496 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:05.602184 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:05.606898 systemd-logind[1572]: New session 16 of user core. May 16 16:42:05.616561 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 16:42:05.724835 sshd[4212]: Connection closed by 10.0.0.1 port 50242 May 16 16:42:05.725188 sshd-session[4210]: pam_unix(sshd:session): session closed for user core May 16 16:42:05.733856 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:50242.service: Deactivated successfully. May 16 16:42:05.735606 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:42:05.736460 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. 
May 16 16:42:05.739429 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:50254.service - OpenSSH per-connection server daemon (10.0.0.1:50254). May 16 16:42:05.740241 systemd-logind[1572]: Removed session 16. May 16 16:42:05.787321 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 50254 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:05.788819 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:05.793301 systemd-logind[1572]: New session 17 of user core. May 16 16:42:05.799508 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 16:42:06.228139 sshd[4228]: Connection closed by 10.0.0.1 port 50254 May 16 16:42:06.228453 sshd-session[4226]: pam_unix(sshd:session): session closed for user core May 16 16:42:06.237723 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:50254.service: Deactivated successfully. May 16 16:42:06.239319 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:42:06.240016 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. May 16 16:42:06.242845 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:50262.service - OpenSSH per-connection server daemon (10.0.0.1:50262). May 16 16:42:06.243448 systemd-logind[1572]: Removed session 17. May 16 16:42:06.295459 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 50262 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:06.296732 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:06.300970 systemd-logind[1572]: New session 18 of user core. May 16 16:42:06.310491 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 16 16:42:07.768273 sshd[4241]: Connection closed by 10.0.0.1 port 50262 May 16 16:42:07.768814 sshd-session[4239]: pam_unix(sshd:session): session closed for user core May 16 16:42:07.778517 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:50262.service: Deactivated successfully. May 16 16:42:07.780494 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:42:07.781404 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. May 16 16:42:07.785158 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:50268.service - OpenSSH per-connection server daemon (10.0.0.1:50268). May 16 16:42:07.786035 systemd-logind[1572]: Removed session 18. May 16 16:42:07.833844 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 50268 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:07.835275 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:07.840027 systemd-logind[1572]: New session 19 of user core. May 16 16:42:07.847501 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 16:42:08.054578 sshd[4265]: Connection closed by 10.0.0.1 port 50268 May 16 16:42:08.055503 sshd-session[4263]: pam_unix(sshd:session): session closed for user core May 16 16:42:08.069497 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:50268.service: Deactivated successfully. May 16 16:42:08.073024 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:42:08.073989 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. May 16 16:42:08.078116 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:50276.service - OpenSSH per-connection server daemon (10.0.0.1:50276). May 16 16:42:08.080125 systemd-logind[1572]: Removed session 19. 
May 16 16:42:08.132052 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 50276 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:08.133489 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:08.138074 systemd-logind[1572]: New session 20 of user core. May 16 16:42:08.150536 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:42:08.254985 sshd[4278]: Connection closed by 10.0.0.1 port 50276 May 16 16:42:08.255286 sshd-session[4276]: pam_unix(sshd:session): session closed for user core May 16 16:42:08.259697 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:50276.service: Deactivated successfully. May 16 16:42:08.261555 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:42:08.262387 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. May 16 16:42:08.263552 systemd-logind[1572]: Removed session 20. May 16 16:42:13.268173 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:50278.service - OpenSSH per-connection server daemon (10.0.0.1:50278). May 16 16:42:13.319229 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 50278 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:13.320956 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:13.325734 systemd-logind[1572]: New session 21 of user core. May 16 16:42:13.343712 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 16:42:13.457204 sshd[4296]: Connection closed by 10.0.0.1 port 50278 May 16 16:42:13.457520 sshd-session[4294]: pam_unix(sshd:session): session closed for user core May 16 16:42:13.460936 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:50278.service: Deactivated successfully. May 16 16:42:13.462857 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:42:13.464428 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. 
May 16 16:42:13.465771 systemd-logind[1572]: Removed session 21. May 16 16:42:16.081893 kubelet[2729]: E0516 16:42:16.081840 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:42:17.069676 kubelet[2729]: E0516 16:42:17.069607 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:42:18.474014 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). May 16 16:42:18.525552 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:18.527085 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:18.531360 systemd-logind[1572]: New session 22 of user core. May 16 16:42:18.539493 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:42:18.648284 sshd[4314]: Connection closed by 10.0.0.1 port 42268 May 16 16:42:18.648606 sshd-session[4312]: pam_unix(sshd:session): session closed for user core May 16 16:42:18.652182 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:42268.service: Deactivated successfully. May 16 16:42:18.654038 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:42:18.654983 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. May 16 16:42:18.656105 systemd-logind[1572]: Removed session 22. 
May 16 16:42:22.068831 kubelet[2729]: E0516 16:42:22.068782 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:42:23.665842 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:39418.service - OpenSSH per-connection server daemon (10.0.0.1:39418). May 16 16:42:23.721301 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 39418 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:23.722896 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:23.727182 systemd-logind[1572]: New session 23 of user core. May 16 16:42:23.736552 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 16:42:23.852491 sshd[4330]: Connection closed by 10.0.0.1 port 39418 May 16 16:42:23.852806 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 16 16:42:23.856361 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:39418.service: Deactivated successfully. May 16 16:42:23.858822 systemd[1]: session-23.scope: Deactivated successfully. May 16 16:42:23.860610 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit. May 16 16:42:23.862515 systemd-logind[1572]: Removed session 23. May 16 16:42:28.865386 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:39434.service - OpenSSH per-connection server daemon (10.0.0.1:39434). May 16 16:42:28.917286 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 39434 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:28.918511 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:28.922719 systemd-logind[1572]: New session 24 of user core. May 16 16:42:28.932525 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 16 16:42:29.039441 sshd[4353]: Connection closed by 10.0.0.1 port 39434 May 16 16:42:29.039743 sshd-session[4351]: pam_unix(sshd:session): session closed for user core May 16 16:42:29.043662 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:39434.service: Deactivated successfully. May 16 16:42:29.045787 systemd[1]: session-24.scope: Deactivated successfully. May 16 16:42:29.046612 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit. May 16 16:42:29.047899 systemd-logind[1572]: Removed session 24. May 16 16:42:34.054201 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:33806.service - OpenSSH per-connection server daemon (10.0.0.1:33806). May 16 16:42:34.099982 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 33806 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:34.101259 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:34.105395 systemd-logind[1572]: New session 25 of user core. May 16 16:42:34.116507 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 16:42:34.237437 sshd[4369]: Connection closed by 10.0.0.1 port 33806 May 16 16:42:34.237743 sshd-session[4367]: pam_unix(sshd:session): session closed for user core May 16 16:42:34.240704 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:33806.service: Deactivated successfully. May 16 16:42:34.242636 systemd[1]: session-25.scope: Deactivated successfully. May 16 16:42:34.244131 systemd-logind[1572]: Session 25 logged out. Waiting for processes to exit. May 16 16:42:34.245876 systemd-logind[1572]: Removed session 25. May 16 16:42:38.068694 kubelet[2729]: E0516 16:42:38.068649 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:42:39.251006 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:33812.service - OpenSSH per-connection server daemon (10.0.0.1:33812). 
May 16 16:42:39.298489 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 33812 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:39.299760 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:39.303596 systemd-logind[1572]: New session 26 of user core. May 16 16:42:39.313499 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 16:42:39.420385 sshd[4384]: Connection closed by 10.0.0.1 port 33812 May 16 16:42:39.420673 sshd-session[4382]: pam_unix(sshd:session): session closed for user core May 16 16:42:39.424256 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:33812.service: Deactivated successfully. May 16 16:42:39.426309 systemd[1]: session-26.scope: Deactivated successfully. May 16 16:42:39.427881 systemd-logind[1572]: Session 26 logged out. Waiting for processes to exit. May 16 16:42:39.429328 systemd-logind[1572]: Removed session 26. May 16 16:42:44.439140 systemd[1]: Started sshd@26-10.0.0.76:22-10.0.0.1:39372.service - OpenSSH per-connection server daemon (10.0.0.1:39372). May 16 16:42:44.489300 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 39372 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:44.490691 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:44.494774 systemd-logind[1572]: New session 27 of user core. May 16 16:42:44.508526 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 16:42:44.610637 sshd[4402]: Connection closed by 10.0.0.1 port 39372 May 16 16:42:44.610951 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 16 16:42:44.615093 systemd[1]: sshd@26-10.0.0.76:22-10.0.0.1:39372.service: Deactivated successfully. May 16 16:42:44.616939 systemd[1]: session-27.scope: Deactivated successfully. May 16 16:42:44.617746 systemd-logind[1572]: Session 27 logged out. Waiting for processes to exit. 
May 16 16:42:44.619115 systemd-logind[1572]: Removed session 27. May 16 16:42:49.627290 systemd[1]: Started sshd@27-10.0.0.76:22-10.0.0.1:39378.service - OpenSSH per-connection server daemon (10.0.0.1:39378). May 16 16:42:49.681766 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 39378 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:49.683579 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:49.687862 systemd-logind[1572]: New session 28 of user core. May 16 16:42:49.697569 systemd[1]: Started session-28.scope - Session 28 of User core. May 16 16:42:49.812756 sshd[4419]: Connection closed by 10.0.0.1 port 39378 May 16 16:42:49.813069 sshd-session[4415]: pam_unix(sshd:session): session closed for user core May 16 16:42:49.817106 systemd[1]: sshd@27-10.0.0.76:22-10.0.0.1:39378.service: Deactivated successfully. May 16 16:42:49.819489 systemd[1]: session-28.scope: Deactivated successfully. May 16 16:42:49.820267 systemd-logind[1572]: Session 28 logged out. Waiting for processes to exit. May 16 16:42:49.821586 systemd-logind[1572]: Removed session 28. May 16 16:42:54.837924 systemd[1]: Started sshd@28-10.0.0.76:22-10.0.0.1:60612.service - OpenSSH per-connection server daemon (10.0.0.1:60612). May 16 16:42:54.888774 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 60612 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:42:54.890057 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:42:54.894735 systemd-logind[1572]: New session 29 of user core. May 16 16:42:54.908487 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 16 16:42:55.020229 sshd[4435]: Connection closed by 10.0.0.1 port 60612 May 16 16:42:55.020547 sshd-session[4432]: pam_unix(sshd:session): session closed for user core May 16 16:42:55.025143 systemd[1]: sshd@28-10.0.0.76:22-10.0.0.1:60612.service: Deactivated successfully. May 16 16:42:55.027575 systemd[1]: session-29.scope: Deactivated successfully. May 16 16:42:55.028290 systemd-logind[1572]: Session 29 logged out. Waiting for processes to exit. May 16 16:42:55.029524 systemd-logind[1572]: Removed session 29. May 16 16:42:58.069045 kubelet[2729]: E0516 16:42:58.069000 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:00.041332 systemd[1]: Started sshd@29-10.0.0.76:22-10.0.0.1:60622.service - OpenSSH per-connection server daemon (10.0.0.1:60622). May 16 16:43:00.089772 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 60622 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:00.091278 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:00.096265 systemd-logind[1572]: New session 30 of user core. May 16 16:43:00.105520 systemd[1]: Started session-30.scope - Session 30 of User core. May 16 16:43:00.210477 sshd[4451]: Connection closed by 10.0.0.1 port 60622 May 16 16:43:00.210745 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 16 16:43:00.214585 systemd[1]: sshd@29-10.0.0.76:22-10.0.0.1:60622.service: Deactivated successfully. May 16 16:43:00.216617 systemd[1]: session-30.scope: Deactivated successfully. May 16 16:43:00.217380 systemd-logind[1572]: Session 30 logged out. Waiting for processes to exit. May 16 16:43:00.218837 systemd-logind[1572]: Removed session 30. 
May 16 16:43:02.068878 kubelet[2729]: E0516 16:43:02.068816 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:02.069336 kubelet[2729]: E0516 16:43:02.068965 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:05.069536 kubelet[2729]: E0516 16:43:05.069502 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:05.226710 systemd[1]: Started sshd@30-10.0.0.76:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). May 16 16:43:05.273229 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:05.274510 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:05.278399 systemd-logind[1572]: New session 31 of user core. May 16 16:43:05.285491 systemd[1]: Started session-31.scope - Session 31 of User core. May 16 16:43:05.385659 sshd[4469]: Connection closed by 10.0.0.1 port 37392 May 16 16:43:05.385892 sshd-session[4467]: pam_unix(sshd:session): session closed for user core May 16 16:43:05.389768 systemd[1]: sshd@30-10.0.0.76:22-10.0.0.1:37392.service: Deactivated successfully. May 16 16:43:05.391822 systemd[1]: session-31.scope: Deactivated successfully. May 16 16:43:05.392740 systemd-logind[1572]: Session 31 logged out. Waiting for processes to exit. May 16 16:43:05.394031 systemd-logind[1572]: Removed session 31. May 16 16:43:10.410215 systemd[1]: Started sshd@31-10.0.0.76:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). 
May 16 16:43:10.460248 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:10.461576 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:10.465447 systemd-logind[1572]: New session 32 of user core. May 16 16:43:10.475489 systemd[1]: Started session-32.scope - Session 32 of User core. May 16 16:43:10.576045 sshd[4485]: Connection closed by 10.0.0.1 port 37398 May 16 16:43:10.576350 sshd-session[4483]: pam_unix(sshd:session): session closed for user core May 16 16:43:10.580307 systemd[1]: sshd@31-10.0.0.76:22-10.0.0.1:37398.service: Deactivated successfully. May 16 16:43:10.582042 systemd[1]: session-32.scope: Deactivated successfully. May 16 16:43:10.582882 systemd-logind[1572]: Session 32 logged out. Waiting for processes to exit. May 16 16:43:10.583998 systemd-logind[1572]: Removed session 32. May 16 16:43:15.588253 systemd[1]: Started sshd@32-10.0.0.76:22-10.0.0.1:41694.service - OpenSSH per-connection server daemon (10.0.0.1:41694). May 16 16:43:15.639163 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 41694 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:15.640474 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:15.645586 systemd-logind[1572]: New session 33 of user core. May 16 16:43:15.655553 systemd[1]: Started session-33.scope - Session 33 of User core. May 16 16:43:15.758588 sshd[4502]: Connection closed by 10.0.0.1 port 41694 May 16 16:43:15.758879 sshd-session[4500]: pam_unix(sshd:session): session closed for user core May 16 16:43:15.763424 systemd[1]: sshd@32-10.0.0.76:22-10.0.0.1:41694.service: Deactivated successfully. May 16 16:43:15.765471 systemd[1]: session-33.scope: Deactivated successfully. May 16 16:43:15.766166 systemd-logind[1572]: Session 33 logged out. Waiting for processes to exit. 
May 16 16:43:15.767360 systemd-logind[1572]: Removed session 33. May 16 16:43:20.775062 systemd[1]: Started sshd@33-10.0.0.76:22-10.0.0.1:41710.service - OpenSSH per-connection server daemon (10.0.0.1:41710). May 16 16:43:20.826657 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 41710 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:20.827886 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:20.832265 systemd-logind[1572]: New session 34 of user core. May 16 16:43:20.842496 systemd[1]: Started session-34.scope - Session 34 of User core. May 16 16:43:20.947519 sshd[4517]: Connection closed by 10.0.0.1 port 41710 May 16 16:43:20.947844 sshd-session[4515]: pam_unix(sshd:session): session closed for user core May 16 16:43:20.952551 systemd[1]: sshd@33-10.0.0.76:22-10.0.0.1:41710.service: Deactivated successfully. May 16 16:43:20.955030 systemd[1]: session-34.scope: Deactivated successfully. May 16 16:43:20.956029 systemd-logind[1572]: Session 34 logged out. Waiting for processes to exit. May 16 16:43:20.957323 systemd-logind[1572]: Removed session 34. May 16 16:43:25.069852 kubelet[2729]: E0516 16:43:25.069814 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:25.961351 systemd[1]: Started sshd@34-10.0.0.76:22-10.0.0.1:33692.service - OpenSSH per-connection server daemon (10.0.0.1:33692). May 16 16:43:26.015112 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 33692 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:26.016675 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:26.021394 systemd-logind[1572]: New session 35 of user core. May 16 16:43:26.028494 systemd[1]: Started session-35.scope - Session 35 of User core. 
May 16 16:43:26.129833 sshd[4532]: Connection closed by 10.0.0.1 port 33692 May 16 16:43:26.130099 sshd-session[4530]: pam_unix(sshd:session): session closed for user core May 16 16:43:26.134000 systemd[1]: sshd@34-10.0.0.76:22-10.0.0.1:33692.service: Deactivated successfully. May 16 16:43:26.135826 systemd[1]: session-35.scope: Deactivated successfully. May 16 16:43:26.136620 systemd-logind[1572]: Session 35 logged out. Waiting for processes to exit. May 16 16:43:26.137831 systemd-logind[1572]: Removed session 35. May 16 16:43:31.147301 systemd[1]: Started sshd@35-10.0.0.76:22-10.0.0.1:33704.service - OpenSSH per-connection server daemon (10.0.0.1:33704). May 16 16:43:31.204455 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 33704 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:31.206174 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:31.210833 systemd-logind[1572]: New session 36 of user core. May 16 16:43:31.218529 systemd[1]: Started session-36.scope - Session 36 of User core. May 16 16:43:31.323084 sshd[4547]: Connection closed by 10.0.0.1 port 33704 May 16 16:43:31.323409 sshd-session[4545]: pam_unix(sshd:session): session closed for user core May 16 16:43:31.327222 systemd[1]: sshd@35-10.0.0.76:22-10.0.0.1:33704.service: Deactivated successfully. May 16 16:43:31.329230 systemd[1]: session-36.scope: Deactivated successfully. May 16 16:43:31.330016 systemd-logind[1572]: Session 36 logged out. Waiting for processes to exit. May 16 16:43:31.331048 systemd-logind[1572]: Removed session 36. May 16 16:43:36.335110 systemd[1]: Started sshd@36-10.0.0.76:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). 
May 16 16:43:36.380095 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:36.381484 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:36.386117 systemd-logind[1572]: New session 37 of user core. May 16 16:43:36.401526 systemd[1]: Started session-37.scope - Session 37 of User core. May 16 16:43:36.507806 sshd[4562]: Connection closed by 10.0.0.1 port 33014 May 16 16:43:36.508122 sshd-session[4560]: pam_unix(sshd:session): session closed for user core May 16 16:43:36.512106 systemd[1]: sshd@36-10.0.0.76:22-10.0.0.1:33014.service: Deactivated successfully. May 16 16:43:36.514533 systemd[1]: session-37.scope: Deactivated successfully. May 16 16:43:36.515352 systemd-logind[1572]: Session 37 logged out. Waiting for processes to exit. May 16 16:43:36.516981 systemd-logind[1572]: Removed session 37. May 16 16:43:41.524779 systemd[1]: Started sshd@37-10.0.0.76:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). May 16 16:43:41.569501 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:41.570756 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:41.574742 systemd-logind[1572]: New session 38 of user core. May 16 16:43:41.584522 systemd[1]: Started session-38.scope - Session 38 of User core. May 16 16:43:41.690209 sshd[4577]: Connection closed by 10.0.0.1 port 33024 May 16 16:43:41.690574 sshd-session[4575]: pam_unix(sshd:session): session closed for user core May 16 16:43:41.694006 systemd[1]: sshd@37-10.0.0.76:22-10.0.0.1:33024.service: Deactivated successfully. May 16 16:43:41.696167 systemd[1]: session-38.scope: Deactivated successfully. May 16 16:43:41.697733 systemd-logind[1572]: Session 38 logged out. Waiting for processes to exit. 
May 16 16:43:41.699175 systemd-logind[1572]: Removed session 38. May 16 16:43:43.069354 kubelet[2729]: E0516 16:43:43.069271 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:45.069637 kubelet[2729]: E0516 16:43:45.069604 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:43:46.707683 systemd[1]: Started sshd@38-10.0.0.76:22-10.0.0.1:57052.service - OpenSSH per-connection server daemon (10.0.0.1:57052). May 16 16:43:46.755383 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 57052 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:46.756721 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:46.761026 systemd-logind[1572]: New session 39 of user core. May 16 16:43:46.771489 systemd[1]: Started session-39.scope - Session 39 of User core. May 16 16:43:46.881606 sshd[4595]: Connection closed by 10.0.0.1 port 57052 May 16 16:43:46.881941 sshd-session[4593]: pam_unix(sshd:session): session closed for user core May 16 16:43:46.887119 systemd[1]: sshd@38-10.0.0.76:22-10.0.0.1:57052.service: Deactivated successfully. May 16 16:43:46.889801 systemd[1]: session-39.scope: Deactivated successfully. May 16 16:43:46.890980 systemd-logind[1572]: Session 39 logged out. Waiting for processes to exit. May 16 16:43:46.892438 systemd-logind[1572]: Removed session 39. May 16 16:43:51.897232 systemd[1]: Started sshd@39-10.0.0.76:22-10.0.0.1:57062.service - OpenSSH per-connection server daemon (10.0.0.1:57062). 
May 16 16:43:51.943855 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 57062 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:51.945710 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:51.950490 systemd-logind[1572]: New session 40 of user core. May 16 16:43:51.959504 systemd[1]: Started session-40.scope - Session 40 of User core. May 16 16:43:52.068801 sshd[4610]: Connection closed by 10.0.0.1 port 57062 May 16 16:43:52.069219 sshd-session[4608]: pam_unix(sshd:session): session closed for user core May 16 16:43:52.073640 systemd[1]: sshd@39-10.0.0.76:22-10.0.0.1:57062.service: Deactivated successfully. May 16 16:43:52.075918 systemd[1]: session-40.scope: Deactivated successfully. May 16 16:43:52.077027 systemd-logind[1572]: Session 40 logged out. Waiting for processes to exit. May 16 16:43:52.078349 systemd-logind[1572]: Removed session 40. May 16 16:43:57.085311 systemd[1]: Started sshd@40-10.0.0.76:22-10.0.0.1:44932.service - OpenSSH per-connection server daemon (10.0.0.1:44932). May 16 16:43:57.134501 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 44932 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:43:57.135779 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:43:57.140399 systemd-logind[1572]: New session 41 of user core. May 16 16:43:57.148553 systemd[1]: Started session-41.scope - Session 41 of User core. May 16 16:43:57.259328 sshd[4625]: Connection closed by 10.0.0.1 port 44932 May 16 16:43:57.259646 sshd-session[4623]: pam_unix(sshd:session): session closed for user core May 16 16:43:57.264697 systemd[1]: sshd@40-10.0.0.76:22-10.0.0.1:44932.service: Deactivated successfully. May 16 16:43:57.266762 systemd[1]: session-41.scope: Deactivated successfully. May 16 16:43:57.267484 systemd-logind[1572]: Session 41 logged out. Waiting for processes to exit. 
May 16 16:43:57.268659 systemd-logind[1572]: Removed session 41. May 16 16:43:59.069089 kubelet[2729]: E0516 16:43:59.069056 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:02.069488 kubelet[2729]: E0516 16:44:02.069446 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:02.276387 systemd[1]: Started sshd@41-10.0.0.76:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). May 16 16:44:02.333024 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:02.334459 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:02.338868 systemd-logind[1572]: New session 42 of user core. May 16 16:44:02.347556 systemd[1]: Started session-42.scope - Session 42 of User core. May 16 16:44:02.454810 sshd[4640]: Connection closed by 10.0.0.1 port 44934 May 16 16:44:02.455160 sshd-session[4638]: pam_unix(sshd:session): session closed for user core May 16 16:44:02.459600 systemd[1]: sshd@41-10.0.0.76:22-10.0.0.1:44934.service: Deactivated successfully. May 16 16:44:02.461714 systemd[1]: session-42.scope: Deactivated successfully. May 16 16:44:02.462557 systemd-logind[1572]: Session 42 logged out. Waiting for processes to exit. May 16 16:44:02.463802 systemd-logind[1572]: Removed session 42. May 16 16:44:07.471290 systemd[1]: Started sshd@42-10.0.0.76:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). 
May 16 16:44:07.520249 sshd[4656]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:07.521854 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:07.528046 systemd-logind[1572]: New session 43 of user core. May 16 16:44:07.532599 systemd[1]: Started session-43.scope - Session 43 of User core. May 16 16:44:07.635654 sshd[4658]: Connection closed by 10.0.0.1 port 53938 May 16 16:44:07.635952 sshd-session[4656]: pam_unix(sshd:session): session closed for user core May 16 16:44:07.638890 systemd[1]: sshd@42-10.0.0.76:22-10.0.0.1:53938.service: Deactivated successfully. May 16 16:44:07.641124 systemd[1]: session-43.scope: Deactivated successfully. May 16 16:44:07.642867 systemd-logind[1572]: Session 43 logged out. Waiting for processes to exit. May 16 16:44:07.644672 systemd-logind[1572]: Removed session 43. May 16 16:44:12.652307 systemd[1]: Started sshd@43-10.0.0.76:22-10.0.0.1:53942.service - OpenSSH per-connection server daemon (10.0.0.1:53942). May 16 16:44:12.705260 sshd[4671]: Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:12.734394 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:12.738588 systemd-logind[1572]: New session 44 of user core. May 16 16:44:12.746533 systemd[1]: Started session-44.scope - Session 44 of User core. May 16 16:44:12.848538 sshd[4673]: Connection closed by 10.0.0.1 port 53942 May 16 16:44:12.848851 sshd-session[4671]: pam_unix(sshd:session): session closed for user core May 16 16:44:12.853286 systemd[1]: sshd@43-10.0.0.76:22-10.0.0.1:53942.service: Deactivated successfully. May 16 16:44:12.855270 systemd[1]: session-44.scope: Deactivated successfully. May 16 16:44:12.856087 systemd-logind[1572]: Session 44 logged out. Waiting for processes to exit. 
May 16 16:44:12.857287 systemd-logind[1572]: Removed session 44. May 16 16:44:13.069306 kubelet[2729]: E0516 16:44:13.069255 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:17.862124 systemd[1]: Started sshd@44-10.0.0.76:22-10.0.0.1:36368.service - OpenSSH per-connection server daemon (10.0.0.1:36368). May 16 16:44:17.918403 sshd[4688]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:17.919936 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:17.924337 systemd-logind[1572]: New session 45 of user core. May 16 16:44:17.932545 systemd[1]: Started session-45.scope - Session 45 of User core. May 16 16:44:18.049852 sshd[4690]: Connection closed by 10.0.0.1 port 36368 May 16 16:44:18.050178 sshd-session[4688]: pam_unix(sshd:session): session closed for user core May 16 16:44:18.054858 systemd[1]: sshd@44-10.0.0.76:22-10.0.0.1:36368.service: Deactivated successfully. May 16 16:44:18.056963 systemd[1]: session-45.scope: Deactivated successfully. May 16 16:44:18.057939 systemd-logind[1572]: Session 45 logged out. Waiting for processes to exit. May 16 16:44:18.059255 systemd-logind[1572]: Removed session 45. May 16 16:44:19.069353 kubelet[2729]: E0516 16:44:19.069315 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:23.074620 systemd[1]: Started sshd@45-10.0.0.76:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374). 
May 16 16:44:23.129106 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:23.130884 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:23.134944 systemd-logind[1572]: New session 46 of user core. May 16 16:44:23.144496 systemd[1]: Started session-46.scope - Session 46 of User core. May 16 16:44:23.249551 sshd[4706]: Connection closed by 10.0.0.1 port 36374 May 16 16:44:23.249854 sshd-session[4704]: pam_unix(sshd:session): session closed for user core May 16 16:44:23.254020 systemd[1]: sshd@45-10.0.0.76:22-10.0.0.1:36374.service: Deactivated successfully. May 16 16:44:23.255940 systemd[1]: session-46.scope: Deactivated successfully. May 16 16:44:23.256786 systemd-logind[1572]: Session 46 logged out. Waiting for processes to exit. May 16 16:44:23.257983 systemd-logind[1572]: Removed session 46. May 16 16:44:25.070053 kubelet[2729]: E0516 16:44:25.070004 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:28.262562 systemd[1]: Started sshd@46-10.0.0.76:22-10.0.0.1:32942.service - OpenSSH per-connection server daemon (10.0.0.1:32942). May 16 16:44:28.317851 sshd[4719]: Accepted publickey for core from 10.0.0.1 port 32942 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:28.319729 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:28.324261 systemd-logind[1572]: New session 47 of user core. May 16 16:44:28.334539 systemd[1]: Started session-47.scope - Session 47 of User core. 
May 16 16:44:28.443477 sshd[4721]: Connection closed by 10.0.0.1 port 32942 May 16 16:44:28.443805 sshd-session[4719]: pam_unix(sshd:session): session closed for user core May 16 16:44:28.447789 systemd[1]: sshd@46-10.0.0.76:22-10.0.0.1:32942.service: Deactivated successfully. May 16 16:44:28.449947 systemd[1]: session-47.scope: Deactivated successfully. May 16 16:44:28.450982 systemd-logind[1572]: Session 47 logged out. Waiting for processes to exit. May 16 16:44:28.452199 systemd-logind[1572]: Removed session 47. May 16 16:44:33.459903 systemd[1]: Started sshd@47-10.0.0.76:22-10.0.0.1:32956.service - OpenSSH per-connection server daemon (10.0.0.1:32956). May 16 16:44:33.503442 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 32956 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:33.504721 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:33.508697 systemd-logind[1572]: New session 48 of user core. May 16 16:44:33.516501 systemd[1]: Started session-48.scope - Session 48 of User core. May 16 16:44:33.622582 sshd[4737]: Connection closed by 10.0.0.1 port 32956 May 16 16:44:33.622880 sshd-session[4735]: pam_unix(sshd:session): session closed for user core May 16 16:44:33.627674 systemd[1]: sshd@47-10.0.0.76:22-10.0.0.1:32956.service: Deactivated successfully. May 16 16:44:33.630228 systemd[1]: session-48.scope: Deactivated successfully. May 16 16:44:33.631092 systemd-logind[1572]: Session 48 logged out. Waiting for processes to exit. May 16 16:44:33.632914 systemd-logind[1572]: Removed session 48. May 16 16:44:34.069084 kubelet[2729]: E0516 16:44:34.069041 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:44:38.640518 systemd[1]: Started sshd@48-10.0.0.76:22-10.0.0.1:52238.service - OpenSSH per-connection server daemon (10.0.0.1:52238). 
May 16 16:44:38.694928 sshd[4750]: Accepted publickey for core from 10.0.0.1 port 52238 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:38.696322 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:38.700412 systemd-logind[1572]: New session 49 of user core. May 16 16:44:38.710505 systemd[1]: Started session-49.scope - Session 49 of User core. May 16 16:44:38.814845 sshd[4752]: Connection closed by 10.0.0.1 port 52238 May 16 16:44:38.815184 sshd-session[4750]: pam_unix(sshd:session): session closed for user core May 16 16:44:38.819813 systemd[1]: sshd@48-10.0.0.76:22-10.0.0.1:52238.service: Deactivated successfully. May 16 16:44:38.821958 systemd[1]: session-49.scope: Deactivated successfully. May 16 16:44:38.823134 systemd-logind[1572]: Session 49 logged out. Waiting for processes to exit. May 16 16:44:38.824864 systemd-logind[1572]: Removed session 49. May 16 16:44:43.826856 systemd[1]: Started sshd@49-10.0.0.76:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). May 16 16:44:43.870823 sshd[4768]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:43.872164 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:43.876901 systemd-logind[1572]: New session 50 of user core. May 16 16:44:43.884494 systemd[1]: Started session-50.scope - Session 50 of User core. May 16 16:44:43.994414 sshd[4770]: Connection closed by 10.0.0.1 port 55040 May 16 16:44:43.994751 sshd-session[4768]: pam_unix(sshd:session): session closed for user core May 16 16:44:43.999584 systemd[1]: sshd@49-10.0.0.76:22-10.0.0.1:55040.service: Deactivated successfully. May 16 16:44:44.001969 systemd[1]: session-50.scope: Deactivated successfully. May 16 16:44:44.003096 systemd-logind[1572]: Session 50 logged out. Waiting for processes to exit. 
May 16 16:44:44.005034 systemd-logind[1572]: Removed session 50. May 16 16:44:49.012132 systemd[1]: Started sshd@50-10.0.0.76:22-10.0.0.1:55052.service - OpenSSH per-connection server daemon (10.0.0.1:55052). May 16 16:44:49.059201 sshd[4783]: Accepted publickey for core from 10.0.0.1 port 55052 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:49.060622 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:49.064826 systemd-logind[1572]: New session 51 of user core. May 16 16:44:49.075533 systemd[1]: Started session-51.scope - Session 51 of User core. May 16 16:44:49.176995 sshd[4785]: Connection closed by 10.0.0.1 port 55052 May 16 16:44:49.177287 sshd-session[4783]: pam_unix(sshd:session): session closed for user core May 16 16:44:49.181235 systemd[1]: sshd@50-10.0.0.76:22-10.0.0.1:55052.service: Deactivated successfully. May 16 16:44:49.183353 systemd[1]: session-51.scope: Deactivated successfully. May 16 16:44:49.184123 systemd-logind[1572]: Session 51 logged out. Waiting for processes to exit. May 16 16:44:49.185326 systemd-logind[1572]: Removed session 51. May 16 16:44:54.198195 systemd[1]: Started sshd@51-10.0.0.76:22-10.0.0.1:54126.service - OpenSSH per-connection server daemon (10.0.0.1:54126). May 16 16:44:54.244921 sshd[4799]: Accepted publickey for core from 10.0.0.1 port 54126 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:54.246354 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:54.250443 systemd-logind[1572]: New session 52 of user core. May 16 16:44:54.258490 systemd[1]: Started session-52.scope - Session 52 of User core. 
May 16 16:44:54.363148 sshd[4801]: Connection closed by 10.0.0.1 port 54126 May 16 16:44:54.363449 sshd-session[4799]: pam_unix(sshd:session): session closed for user core May 16 16:44:54.367625 systemd[1]: sshd@51-10.0.0.76:22-10.0.0.1:54126.service: Deactivated successfully. May 16 16:44:54.369602 systemd[1]: session-52.scope: Deactivated successfully. May 16 16:44:54.370333 systemd-logind[1572]: Session 52 logged out. Waiting for processes to exit. May 16 16:44:54.371464 systemd-logind[1572]: Removed session 52. May 16 16:44:59.380174 systemd[1]: Started sshd@52-10.0.0.76:22-10.0.0.1:54136.service - OpenSSH per-connection server daemon (10.0.0.1:54136). May 16 16:44:59.430780 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 54136 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:44:59.432235 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:44:59.436477 systemd-logind[1572]: New session 53 of user core. May 16 16:44:59.447491 systemd[1]: Started session-53.scope - Session 53 of User core. May 16 16:44:59.550979 sshd[4816]: Connection closed by 10.0.0.1 port 54136 May 16 16:44:59.551278 sshd-session[4814]: pam_unix(sshd:session): session closed for user core May 16 16:44:59.555714 systemd[1]: sshd@52-10.0.0.76:22-10.0.0.1:54136.service: Deactivated successfully. May 16 16:44:59.557801 systemd[1]: session-53.scope: Deactivated successfully. May 16 16:44:59.558541 systemd-logind[1572]: Session 53 logged out. Waiting for processes to exit. May 16 16:44:59.559655 systemd-logind[1572]: Removed session 53. May 16 16:45:00.069504 kubelet[2729]: E0516 16:45:00.069450 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:04.567108 systemd[1]: Started sshd@53-10.0.0.76:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). 
May 16 16:45:04.619849 sshd[4830]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:45:04.621248 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:45:04.625231 systemd-logind[1572]: New session 54 of user core. May 16 16:45:04.634527 systemd[1]: Started session-54.scope - Session 54 of User core. May 16 16:45:04.736784 sshd[4832]: Connection closed by 10.0.0.1 port 57630 May 16 16:45:04.737075 sshd-session[4830]: pam_unix(sshd:session): session closed for user core May 16 16:45:04.740857 systemd[1]: sshd@53-10.0.0.76:22-10.0.0.1:57630.service: Deactivated successfully. May 16 16:45:04.742621 systemd[1]: session-54.scope: Deactivated successfully. May 16 16:45:04.743562 systemd-logind[1572]: Session 54 logged out. Waiting for processes to exit. May 16 16:45:04.744758 systemd-logind[1572]: Removed session 54. May 16 16:45:07.069195 kubelet[2729]: E0516 16:45:07.069126 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:09.753910 systemd[1]: Started sshd@54-10.0.0.76:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642). May 16 16:45:09.807837 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:45:09.810333 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:45:09.816334 systemd-logind[1572]: New session 55 of user core. May 16 16:45:09.824534 systemd[1]: Started session-55.scope - Session 55 of User core. 
May 16 16:45:09.931275 sshd[4850]: Connection closed by 10.0.0.1 port 57642 May 16 16:45:09.931610 sshd-session[4848]: pam_unix(sshd:session): session closed for user core May 16 16:45:09.947612 systemd[1]: sshd@54-10.0.0.76:22-10.0.0.1:57642.service: Deactivated successfully. May 16 16:45:09.949832 systemd[1]: session-55.scope: Deactivated successfully. May 16 16:45:09.950746 systemd-logind[1572]: Session 55 logged out. Waiting for processes to exit. May 16 16:45:09.954586 systemd[1]: Started sshd@55-10.0.0.76:22-10.0.0.1:57652.service - OpenSSH per-connection server daemon (10.0.0.1:57652). May 16 16:45:09.955412 systemd-logind[1572]: Removed session 55. May 16 16:45:10.008025 sshd[4864]: Accepted publickey for core from 10.0.0.1 port 57652 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:45:10.009537 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:45:10.014235 systemd-logind[1572]: New session 56 of user core. May 16 16:45:10.020497 systemd[1]: Started session-56.scope - Session 56 of User core. 
May 16 16:45:11.826308 containerd[1594]: time="2025-05-16T16:45:11.826246800Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:45:11.835781 containerd[1594]: time="2025-05-16T16:45:11.835748196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" id:\"06bec7f5c4b081c2ef6e25ff519995a334ebc84f04dfee9505a921cdb76fa704\" pid:4886 exited_at:{seconds:1747413911 nanos:835426803}" May 16 16:45:11.838040 containerd[1594]: time="2025-05-16T16:45:11.838009052Z" level=info msg="StopContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" with timeout 2 (s)" May 16 16:45:11.845754 containerd[1594]: time="2025-05-16T16:45:11.845709647Z" level=info msg="Stop container \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" with signal terminated" May 16 16:45:11.853914 systemd-networkd[1497]: lxc_health: Link DOWN May 16 16:45:11.853921 systemd-networkd[1497]: lxc_health: Lost carrier May 16 16:45:11.862182 containerd[1594]: time="2025-05-16T16:45:11.862136082Z" level=info msg="StopContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" with timeout 30 (s)" May 16 16:45:11.862661 containerd[1594]: time="2025-05-16T16:45:11.862639812Z" level=info msg="Stop container \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" with signal terminated" May 16 16:45:11.871791 systemd[1]: cri-containerd-18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af.scope: Deactivated successfully. May 16 16:45:11.872575 systemd[1]: cri-containerd-18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af.scope: Consumed 6.812s CPU time, 126.1M memory peak, 232K read from disk, 13.3M written to disk. 
May 16 16:45:11.872885 containerd[1594]: time="2025-05-16T16:45:11.872843615Z" level=info msg="received exit event container_id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" pid:3395 exited_at:{seconds:1747413911 nanos:872646299}" May 16 16:45:11.873031 containerd[1594]: time="2025-05-16T16:45:11.873002568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" id:\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" pid:3395 exited_at:{seconds:1747413911 nanos:872646299}" May 16 16:45:11.875611 systemd[1]: cri-containerd-3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39.scope: Deactivated successfully. May 16 16:45:11.877636 containerd[1594]: time="2025-05-16T16:45:11.877583612Z" level=info msg="received exit event container_id:\"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" id:\"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" pid:3279 exited_at:{seconds:1747413911 nanos:877197577}" May 16 16:45:11.877979 containerd[1594]: time="2025-05-16T16:45:11.877950181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" id:\"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" pid:3279 exited_at:{seconds:1747413911 nanos:877197577}" May 16 16:45:11.898242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af-rootfs.mount: Deactivated successfully. May 16 16:45:11.904109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39-rootfs.mount: Deactivated successfully. 
May 16 16:45:12.054158 containerd[1594]: time="2025-05-16T16:45:12.054101351Z" level=info msg="StopContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" returns successfully" May 16 16:45:12.054895 containerd[1594]: time="2025-05-16T16:45:12.054852602Z" level=info msg="StopPodSandbox for \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\"" May 16 16:45:12.055033 containerd[1594]: time="2025-05-16T16:45:12.054942754Z" level=info msg="Container to stop \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.055033 containerd[1594]: time="2025-05-16T16:45:12.054960348Z" level=info msg="Container to stop \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.055033 containerd[1594]: time="2025-05-16T16:45:12.054971098Z" level=info msg="Container to stop \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.055033 containerd[1594]: time="2025-05-16T16:45:12.054982219Z" level=info msg="Container to stop \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.055033 containerd[1594]: time="2025-05-16T16:45:12.054992118Z" level=info msg="Container to stop \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.062207 systemd[1]: cri-containerd-73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93.scope: Deactivated successfully. 
May 16 16:45:12.063725 containerd[1594]: time="2025-05-16T16:45:12.063687205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" id:\"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" pid:2871 exit_status:137 exited_at:{seconds:1747413912 nanos:63134012}" May 16 16:45:12.064775 containerd[1594]: time="2025-05-16T16:45:12.064745390Z" level=info msg="StopContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" returns successfully" May 16 16:45:12.065339 containerd[1594]: time="2025-05-16T16:45:12.065294175Z" level=info msg="StopPodSandbox for \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\"" May 16 16:45:12.065437 containerd[1594]: time="2025-05-16T16:45:12.065416759Z" level=info msg="Container to stop \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:45:12.073611 systemd[1]: cri-containerd-927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4.scope: Deactivated successfully. May 16 16:45:12.093221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93-rootfs.mount: Deactivated successfully. 
May 16 16:45:12.098608 containerd[1594]: time="2025-05-16T16:45:12.098567357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" id:\"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" pid:2881 exit_status:137 exited_at:{seconds:1747413912 nanos:74677463}" May 16 16:45:12.100399 containerd[1594]: time="2025-05-16T16:45:12.098891944Z" level=info msg="TearDown network for sandbox \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" successfully" May 16 16:45:12.100399 containerd[1594]: time="2025-05-16T16:45:12.098909527Z" level=info msg="StopPodSandbox for \"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" returns successfully" May 16 16:45:12.100399 containerd[1594]: time="2025-05-16T16:45:12.100167773Z" level=info msg="shim disconnected" id=927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4 namespace=k8s.io May 16 16:45:12.100399 containerd[1594]: time="2025-05-16T16:45:12.100181610Z" level=warning msg="cleaning up after shim disconnected" id=927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4 namespace=k8s.io May 16 16:45:12.101171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4-rootfs.mount: Deactivated successfully. May 16 16:45:12.104715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4-shm.mount: Deactivated successfully. May 16 16:45:12.104926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93-shm.mount: Deactivated successfully. 
May 16 16:45:12.134301 containerd[1594]: time="2025-05-16T16:45:12.100188603Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:45:12.134511 containerd[1594]: time="2025-05-16T16:45:12.101653663Z" level=info msg="TearDown network for sandbox \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" successfully" May 16 16:45:12.134511 containerd[1594]: time="2025-05-16T16:45:12.134345948Z" level=info msg="StopPodSandbox for \"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" returns successfully" May 16 16:45:12.134511 containerd[1594]: time="2025-05-16T16:45:12.101995475Z" level=info msg="shim disconnected" id=73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93 namespace=k8s.io May 16 16:45:12.134591 containerd[1594]: time="2025-05-16T16:45:12.108285743Z" level=info msg="received exit event sandbox_id:\"73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93\" exit_status:137 exited_at:{seconds:1747413912 nanos:63134012}" May 16 16:45:12.134591 containerd[1594]: time="2025-05-16T16:45:12.134523937Z" level=warning msg="cleaning up after shim disconnected" id=73db69a3e424b6cff8f38af969a715b266765c03eac4f40ce4030d4b8da7cc93 namespace=k8s.io May 16 16:45:12.134591 containerd[1594]: time="2025-05-16T16:45:12.134537272Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:45:12.134889 containerd[1594]: time="2025-05-16T16:45:12.108326701Z" level=info msg="received exit event sandbox_id:\"927e2f36bdb91a9dfe352feb815ca510fdd92ec2ed4a323d3ebae8d99229c0f4\" exit_status:137 exited_at:{seconds:1747413912 nanos:74677463}" May 16 16:45:12.141155 kubelet[2729]: I0516 16:45:12.141121 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-lib-modules\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.141799 kubelet[2729]: 
I0516 16:45:12.141698 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-cgroup\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.141799 kubelet[2729]: I0516 16:45:12.141720 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-net\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.141799 kubelet[2729]: I0516 16:45:12.141259 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.141799 kubelet[2729]: I0516 16:45:12.141759 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.141799 kubelet[2729]: I0516 16:45:12.141734 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-bpf-maps\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.141931 kubelet[2729]: I0516 16:45:12.141768 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.141931 kubelet[2729]: I0516 16:45:12.141782 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.141931 kubelet[2729]: I0516 16:45:12.141820 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-hubble-tls\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142155 kubelet[2729]: I0516 16:45:12.142112 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-run\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142185 kubelet[2729]: I0516 16:45:12.142153 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcce061b-d8de-4286-998a-b00bc4f7fefd-clustermesh-secrets\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142185 kubelet[2729]: I0516 16:45:12.142176 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-xtables-lock\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142235 kubelet[2729]: I0516 16:45:12.142196 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cni-path\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142235 kubelet[2729]: I0516 16:45:12.142201 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-run" 
(OuterVolumeSpecName: "cilium-run") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.142235 kubelet[2729]: I0516 16:45:12.142217 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-hostproc\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142300 kubelet[2729]: I0516 16:45:12.142243 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtff\" (UniqueName: \"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-kube-api-access-ldtff\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142300 kubelet[2729]: I0516 16:45:12.142251 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.142300 kubelet[2729]: I0516 16:45:12.142264 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-etc-cni-netd\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142300 kubelet[2729]: I0516 16:45:12.142288 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-kernel\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142423 kubelet[2729]: I0516 16:45:12.142321 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-config-path\") pod \"dcce061b-d8de-4286-998a-b00bc4f7fefd\" (UID: \"dcce061b-d8de-4286-998a-b00bc4f7fefd\") " May 16 16:45:12.142423 kubelet[2729]: I0516 16:45:12.142363 2729 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.142423 kubelet[2729]: I0516 16:45:12.142411 2729 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.142423 kubelet[2729]: I0516 16:45:12.142422 2729 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.142520 kubelet[2729]: I0516 16:45:12.142433 2729 
reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.142520 kubelet[2729]: I0516 16:45:12.142446 2729 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.142520 kubelet[2729]: I0516 16:45:12.142457 2729 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.146076 kubelet[2729]: I0516 16:45:12.146008 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:45:12.146341 kubelet[2729]: I0516 16:45:12.146253 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cni-path" (OuterVolumeSpecName: "cni-path") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.146341 kubelet[2729]: I0516 16:45:12.146279 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-hostproc" (OuterVolumeSpecName: "hostproc") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.146341 kubelet[2729]: I0516 16:45:12.146293 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.146341 kubelet[2729]: I0516 16:45:12.146307 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 16:45:12.149156 kubelet[2729]: I0516 16:45:12.149077 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 16:45:12.150056 kubelet[2729]: I0516 16:45:12.150024 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcce061b-d8de-4286-998a-b00bc4f7fefd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 16:45:12.150246 kubelet[2729]: I0516 16:45:12.150224 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-kube-api-access-ldtff" (OuterVolumeSpecName: "kube-api-access-ldtff") pod "dcce061b-d8de-4286-998a-b00bc4f7fefd" (UID: "dcce061b-d8de-4286-998a-b00bc4f7fefd"). InnerVolumeSpecName "kube-api-access-ldtff". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:45:12.242762 kubelet[2729]: I0516 16:45:12.242706 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af50d0bd-ef28-4385-9f84-f0924ae94701-cilium-config-path\") pod \"af50d0bd-ef28-4385-9f84-f0924ae94701\" (UID: \"af50d0bd-ef28-4385-9f84-f0924ae94701\") " May 16 16:45:12.242762 kubelet[2729]: I0516 16:45:12.242772 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzkbc\" (UniqueName: \"kubernetes.io/projected/af50d0bd-ef28-4385-9f84-f0924ae94701-kube-api-access-fzkbc\") pod \"af50d0bd-ef28-4385-9f84-f0924ae94701\" (UID: \"af50d0bd-ef28-4385-9f84-f0924ae94701\") " May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242809 2729 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242837 2729 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242849 2729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtff\" (UniqueName: 
\"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-kube-api-access-ldtff\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242860 2729 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242870 2729 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcce061b-d8de-4286-998a-b00bc4f7fefd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242881 2729 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcce061b-d8de-4286-998a-b00bc4f7fefd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242891 2729 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcce061b-d8de-4286-998a-b00bc4f7fefd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.242975 kubelet[2729]: I0516 16:45:12.242899 2729 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcce061b-d8de-4286-998a-b00bc4f7fefd-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.245792 kubelet[2729]: I0516 16:45:12.245753 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af50d0bd-ef28-4385-9f84-f0924ae94701-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af50d0bd-ef28-4385-9f84-f0924ae94701" (UID: "af50d0bd-ef28-4385-9f84-f0924ae94701"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 16:45:12.246186 kubelet[2729]: I0516 16:45:12.246132 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af50d0bd-ef28-4385-9f84-f0924ae94701-kube-api-access-fzkbc" (OuterVolumeSpecName: "kube-api-access-fzkbc") pod "af50d0bd-ef28-4385-9f84-f0924ae94701" (UID: "af50d0bd-ef28-4385-9f84-f0924ae94701"). InnerVolumeSpecName "kube-api-access-fzkbc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:45:12.344046 kubelet[2729]: I0516 16:45:12.343947 2729 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af50d0bd-ef28-4385-9f84-f0924ae94701-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.344046 kubelet[2729]: I0516 16:45:12.343998 2729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzkbc\" (UniqueName: \"kubernetes.io/projected/af50d0bd-ef28-4385-9f84-f0924ae94701-kube-api-access-fzkbc\") on node \"localhost\" DevicePath \"\"" May 16 16:45:12.794967 kubelet[2729]: I0516 16:45:12.794876 2729 scope.go:117] "RemoveContainer" containerID="3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39" May 16 16:45:12.797729 containerd[1594]: time="2025-05-16T16:45:12.797646774Z" level=info msg="RemoveContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\"" May 16 16:45:12.802159 systemd[1]: Removed slice kubepods-besteffort-podaf50d0bd_ef28_4385_9f84_f0924ae94701.slice - libcontainer container kubepods-besteffort-podaf50d0bd_ef28_4385_9f84_f0924ae94701.slice. May 16 16:45:12.807587 systemd[1]: Removed slice kubepods-burstable-poddcce061b_d8de_4286_998a_b00bc4f7fefd.slice - libcontainer container kubepods-burstable-poddcce061b_d8de_4286_998a_b00bc4f7fefd.slice. 
May 16 16:45:12.807784 systemd[1]: kubepods-burstable-poddcce061b_d8de_4286_998a_b00bc4f7fefd.slice: Consumed 6.922s CPU time, 126.5M memory peak, 244K read from disk, 16.6M written to disk. May 16 16:45:12.897878 systemd[1]: var-lib-kubelet-pods-af50d0bd\x2def28\x2d4385\x2d9f84\x2df0924ae94701-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzkbc.mount: Deactivated successfully. May 16 16:45:12.898019 systemd[1]: var-lib-kubelet-pods-dcce061b\x2dd8de\x2d4286\x2d998a\x2db00bc4f7fefd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldtff.mount: Deactivated successfully. May 16 16:45:12.898121 systemd[1]: var-lib-kubelet-pods-dcce061b\x2dd8de\x2d4286\x2d998a\x2db00bc4f7fefd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 16:45:12.898215 systemd[1]: var-lib-kubelet-pods-dcce061b\x2dd8de\x2d4286\x2d998a\x2db00bc4f7fefd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 16:45:12.932842 containerd[1594]: time="2025-05-16T16:45:12.932794531Z" level=info msg="RemoveContainer for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" returns successfully" May 16 16:45:12.933193 kubelet[2729]: I0516 16:45:12.933152 2729 scope.go:117] "RemoveContainer" containerID="3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39" May 16 16:45:12.933568 containerd[1594]: time="2025-05-16T16:45:12.933511837Z" level=error msg="ContainerStatus for \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\": not found" May 16 16:45:12.938837 kubelet[2729]: E0516 16:45:12.938793 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\": 
not found" containerID="3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39" May 16 16:45:12.939011 kubelet[2729]: I0516 16:45:12.938845 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39"} err="failed to get container status \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\": rpc error: code = NotFound desc = an error occurred when try to find container \"3961070732b457671cfdc4c7398920666e527ff28b60e7a0e6e244b940db2a39\": not found" May 16 16:45:12.939011 kubelet[2729]: I0516 16:45:12.938932 2729 scope.go:117] "RemoveContainer" containerID="18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af" May 16 16:45:12.940683 containerd[1594]: time="2025-05-16T16:45:12.940633378Z" level=info msg="RemoveContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\"" May 16 16:45:13.064000 containerd[1594]: time="2025-05-16T16:45:13.063851892Z" level=info msg="RemoveContainer for \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" returns successfully" May 16 16:45:13.064160 kubelet[2729]: I0516 16:45:13.064070 2729 scope.go:117] "RemoveContainer" containerID="4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6" May 16 16:45:13.065741 containerd[1594]: time="2025-05-16T16:45:13.065698277Z" level=info msg="RemoveContainer for \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\"" May 16 16:45:13.071261 kubelet[2729]: I0516 16:45:13.071225 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af50d0bd-ef28-4385-9f84-f0924ae94701" path="/var/lib/kubelet/pods/af50d0bd-ef28-4385-9f84-f0924ae94701/volumes" May 16 16:45:13.086928 containerd[1594]: time="2025-05-16T16:45:13.086862353Z" level=info msg="RemoveContainer for \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" returns successfully" May 16 16:45:13.087237 
kubelet[2729]: I0516 16:45:13.087183 2729 scope.go:117] "RemoveContainer" containerID="8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989" May 16 16:45:13.090779 containerd[1594]: time="2025-05-16T16:45:13.090740627Z" level=info msg="RemoveContainer for \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\"" May 16 16:45:13.096045 containerd[1594]: time="2025-05-16T16:45:13.096023305Z" level=info msg="RemoveContainer for \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" returns successfully" May 16 16:45:13.096263 kubelet[2729]: I0516 16:45:13.096233 2729 scope.go:117] "RemoveContainer" containerID="9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04" May 16 16:45:13.097574 containerd[1594]: time="2025-05-16T16:45:13.097547106Z" level=info msg="RemoveContainer for \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\"" May 16 16:45:13.101475 containerd[1594]: time="2025-05-16T16:45:13.101439367Z" level=info msg="RemoveContainer for \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" returns successfully" May 16 16:45:13.101720 kubelet[2729]: I0516 16:45:13.101660 2729 scope.go:117] "RemoveContainer" containerID="647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39" May 16 16:45:13.103019 containerd[1594]: time="2025-05-16T16:45:13.102979159Z" level=info msg="RemoveContainer for \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\"" May 16 16:45:13.106819 containerd[1594]: time="2025-05-16T16:45:13.106771380Z" level=info msg="RemoveContainer for \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" returns successfully" May 16 16:45:13.107046 kubelet[2729]: I0516 16:45:13.107006 2729 scope.go:117] "RemoveContainer" containerID="18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af" May 16 16:45:13.107337 containerd[1594]: time="2025-05-16T16:45:13.107287763Z" level=error msg="ContainerStatus for 
\"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\": not found" May 16 16:45:13.107546 kubelet[2729]: E0516 16:45:13.107507 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\": not found" containerID="18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af" May 16 16:45:13.107611 kubelet[2729]: I0516 16:45:13.107545 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af"} err="failed to get container status \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\": rpc error: code = NotFound desc = an error occurred when try to find container \"18cce63ab1b37a3e4284f94ac15d3e13e3614584101a6f8e614cad45312ad7af\": not found" May 16 16:45:13.107611 kubelet[2729]: I0516 16:45:13.107570 2729 scope.go:117] "RemoveContainer" containerID="4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6" May 16 16:45:13.107790 containerd[1594]: time="2025-05-16T16:45:13.107756304Z" level=error msg="ContainerStatus for \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\": not found" May 16 16:45:13.107915 kubelet[2729]: E0516 16:45:13.107891 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\": not found" 
containerID="4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6"
May 16 16:45:13.107985 kubelet[2729]: I0516 16:45:13.107913 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6"} err="failed to get container status \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b99a8ac1eadd19284199b182806203d3a8c34ccf215f8a37a2505c3b3eafec6\": not found"
May 16 16:45:13.107985 kubelet[2729]: I0516 16:45:13.107927 2729 scope.go:117] "RemoveContainer" containerID="8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989"
May 16 16:45:13.108289 containerd[1594]: time="2025-05-16T16:45:13.108138182Z" level=error msg="ContainerStatus for \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\": not found"
May 16 16:45:13.108353 kubelet[2729]: E0516 16:45:13.108310 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\": not found" containerID="8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989"
May 16 16:45:13.108421 kubelet[2729]: I0516 16:45:13.108347 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989"} err="failed to get container status \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a74fe708b4d2bf943ca415d7a37047f29939db825cc709f2223adab9e7ff989\": not found"
May 16 16:45:13.108421 kubelet[2729]: I0516 16:45:13.108366 2729 scope.go:117] "RemoveContainer" containerID="9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04"
May 16 16:45:13.108565 containerd[1594]: time="2025-05-16T16:45:13.108532733Z" level=error msg="ContainerStatus for \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\": not found"
May 16 16:45:13.108699 kubelet[2729]: E0516 16:45:13.108672 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\": not found" containerID="9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04"
May 16 16:45:13.108747 kubelet[2729]: I0516 16:45:13.108697 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04"} err="failed to get container status \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\": rpc error: code = NotFound desc = an error occurred when try to find container \"9280ae5f9bd2845a654f63821de3318fc9e593fed4c3e4329e926edacf7cdb04\": not found"
May 16 16:45:13.108747 kubelet[2729]: I0516 16:45:13.108713 2729 scope.go:117] "RemoveContainer" containerID="647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39"
May 16 16:45:13.108923 containerd[1594]: time="2025-05-16T16:45:13.108888962Z" level=error msg="ContainerStatus for \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\": not found"
May 16 16:45:13.109076 kubelet[2729]: E0516 16:45:13.109049 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\": not found" containerID="647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39"
May 16 16:45:13.109109 kubelet[2729]: I0516 16:45:13.109088 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39"} err="failed to get container status \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\": rpc error: code = NotFound desc = an error occurred when try to find container \"647c72be3284d840da8ce5d525c9a739d6a05405bd1114315ca07dd343320f39\": not found"
May 16 16:45:13.412976 sshd[4866]: Connection closed by 10.0.0.1 port 57652
May 16 16:45:13.413805 sshd-session[4864]: pam_unix(sshd:session): session closed for user core
May 16 16:45:13.425856 systemd[1]: sshd@55-10.0.0.76:22-10.0.0.1:57652.service: Deactivated successfully.
May 16 16:45:13.427576 systemd[1]: session-56.scope: Deactivated successfully.
May 16 16:45:13.428403 systemd-logind[1572]: Session 56 logged out. Waiting for processes to exit.
May 16 16:45:13.430970 systemd[1]: Started sshd@56-10.0.0.76:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654).
May 16 16:45:13.431958 systemd-logind[1572]: Removed session 56.
May 16 16:45:13.488926 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:13.490279 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:13.494539 systemd-logind[1572]: New session 57 of user core.
May 16 16:45:13.506498 systemd[1]: Started session-57.scope - Session 57 of User core.
May 16 16:45:14.068892 kubelet[2729]: E0516 16:45:14.068838 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:14.140914 sshd[5021]: Connection closed by 10.0.0.1 port 57654
May 16 16:45:14.141328 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
May 16 16:45:14.151407 systemd[1]: sshd@56-10.0.0.76:22-10.0.0.1:57654.service: Deactivated successfully.
May 16 16:45:14.156458 systemd[1]: session-57.scope: Deactivated successfully.
May 16 16:45:14.157994 systemd-logind[1572]: Session 57 logged out. Waiting for processes to exit.
May 16 16:45:14.165352 kubelet[2729]: E0516 16:45:14.165301 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="mount-bpf-fs"
May 16 16:45:14.165352 kubelet[2729]: E0516 16:45:14.165341 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="clean-cilium-state"
May 16 16:45:14.165352 kubelet[2729]: E0516 16:45:14.165348 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="cilium-agent"
May 16 16:45:14.165352 kubelet[2729]: E0516 16:45:14.165354 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="mount-cgroup"
May 16 16:45:14.165352 kubelet[2729]: E0516 16:45:14.165360 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="apply-sysctl-overwrites"
May 16 16:45:14.165863 kubelet[2729]: E0516 16:45:14.165366 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af50d0bd-ef28-4385-9f84-f0924ae94701" containerName="cilium-operator"
May 16 16:45:14.165930 kubelet[2729]: I0516 16:45:14.165865 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="af50d0bd-ef28-4385-9f84-f0924ae94701" containerName="cilium-operator"
May 16 16:45:14.165930 kubelet[2729]: I0516 16:45:14.165874 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" containerName="cilium-agent"
May 16 16:45:14.167277 systemd[1]: Started sshd@57-10.0.0.76:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492).
May 16 16:45:14.169862 systemd-logind[1572]: Removed session 57.
May 16 16:45:14.187036 systemd[1]: Created slice kubepods-burstable-pod622051f7_eef1_4938_b946_0218a3e6b70c.slice - libcontainer container kubepods-burstable-pod622051f7_eef1_4938_b946_0218a3e6b70c.slice.
May 16 16:45:14.217589 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:14.218944 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:14.223240 systemd-logind[1572]: New session 58 of user core.
May 16 16:45:14.234627 systemd[1]: Started session-58.scope - Session 58 of User core.
May 16 16:45:14.286546 sshd[5035]: Connection closed by 10.0.0.1 port 58492
May 16 16:45:14.286838 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
May 16 16:45:14.303964 systemd[1]: sshd@57-10.0.0.76:22-10.0.0.1:58492.service: Deactivated successfully.
May 16 16:45:14.305997 systemd[1]: session-58.scope: Deactivated successfully.
May 16 16:45:14.306891 systemd-logind[1572]: Session 58 logged out. Waiting for processes to exit.
May 16 16:45:14.310488 systemd[1]: Started sshd@58-10.0.0.76:22-10.0.0.1:58494.service - OpenSSH per-connection server daemon (10.0.0.1:58494).
May 16 16:45:14.311043 systemd-logind[1572]: Removed session 58.
May 16 16:45:14.356300 kubelet[2729]: I0516 16:45:14.356255 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-cilium-cgroup\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356300 kubelet[2729]: I0516 16:45:14.356297 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-lib-modules\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356300 kubelet[2729]: I0516 16:45:14.356312 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-xtables-lock\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356478 kubelet[2729]: I0516 16:45:14.356338 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/622051f7-eef1-4938-b946-0218a3e6b70c-cilium-ipsec-secrets\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356478 kubelet[2729]: I0516 16:45:14.356356 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/622051f7-eef1-4938-b946-0218a3e6b70c-hubble-tls\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356478 kubelet[2729]: I0516 16:45:14.356387 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/622051f7-eef1-4938-b946-0218a3e6b70c-cilium-config-path\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356549 kubelet[2729]: I0516 16:45:14.356503 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqllq\" (UniqueName: \"kubernetes.io/projected/622051f7-eef1-4938-b946-0218a3e6b70c-kube-api-access-mqllq\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356549 kubelet[2729]: I0516 16:45:14.356541 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-bpf-maps\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356589 kubelet[2729]: I0516 16:45:14.356566 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-cilium-run\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356614 kubelet[2729]: I0516 16:45:14.356589 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-hostproc\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356614 kubelet[2729]: I0516 16:45:14.356603 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-cni-path\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356661 kubelet[2729]: I0516 16:45:14.356618 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-host-proc-sys-kernel\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356661 kubelet[2729]: I0516 16:45:14.356635 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-etc-cni-netd\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356661 kubelet[2729]: I0516 16:45:14.356650 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/622051f7-eef1-4938-b946-0218a3e6b70c-clustermesh-secrets\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.356726 kubelet[2729]: I0516 16:45:14.356664 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/622051f7-eef1-4938-b946-0218a3e6b70c-host-proc-sys-net\") pod \"cilium-gvcft\" (UID: \"622051f7-eef1-4938-b946-0218a3e6b70c\") " pod="kube-system/cilium-gvcft"
May 16 16:45:14.357882 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 58494 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:14.359509 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:14.364108 systemd-logind[1572]: New session 59 of user core.
May 16 16:45:14.375488 systemd[1]: Started session-59.scope - Session 59 of User core.
May 16 16:45:14.490748 kubelet[2729]: E0516 16:45:14.490702 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:14.491919 containerd[1594]: time="2025-05-16T16:45:14.491868878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvcft,Uid:622051f7-eef1-4938-b946-0218a3e6b70c,Namespace:kube-system,Attempt:0,}"
May 16 16:45:14.508503 containerd[1594]: time="2025-05-16T16:45:14.508451587Z" level=info msg="connecting to shim 5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" namespace=k8s.io protocol=ttrpc version=3
May 16 16:45:14.538563 systemd[1]: Started cri-containerd-5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670.scope - libcontainer container 5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670.
May 16 16:45:14.569862 containerd[1594]: time="2025-05-16T16:45:14.569824654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvcft,Uid:622051f7-eef1-4938-b946-0218a3e6b70c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\""
May 16 16:45:14.570710 kubelet[2729]: E0516 16:45:14.570685 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:14.573202 containerd[1594]: time="2025-05-16T16:45:14.573132762Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 16:45:14.585971 containerd[1594]: time="2025-05-16T16:45:14.585875451Z" level=info msg="Container 60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:14.596655 containerd[1594]: time="2025-05-16T16:45:14.596607802Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\""
May 16 16:45:14.597212 containerd[1594]: time="2025-05-16T16:45:14.597158632Z" level=info msg="StartContainer for \"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\""
May 16 16:45:14.598047 containerd[1594]: time="2025-05-16T16:45:14.597999392Z" level=info msg="connecting to shim 60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" protocol=ttrpc version=3
May 16 16:45:14.622564 systemd[1]: Started cri-containerd-60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466.scope - libcontainer container 60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466.
May 16 16:45:14.654461 containerd[1594]: time="2025-05-16T16:45:14.654412588Z" level=info msg="StartContainer for \"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\" returns successfully"
May 16 16:45:14.663655 systemd[1]: cri-containerd-60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466.scope: Deactivated successfully.
May 16 16:45:14.664859 containerd[1594]: time="2025-05-16T16:45:14.664801535Z" level=info msg="received exit event container_id:\"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\" id:\"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\" pid:5113 exited_at:{seconds:1747413914 nanos:664414257}"
May 16 16:45:14.664979 containerd[1594]: time="2025-05-16T16:45:14.664808339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\" id:\"60fca48ede18e2d84a145e4c111fe15cd1e126913559ed99bf07080930fab466\" pid:5113 exited_at:{seconds:1747413914 nanos:664414257}"
May 16 16:45:14.811087 kubelet[2729]: E0516 16:45:14.811059 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:14.813297 containerd[1594]: time="2025-05-16T16:45:14.812961782Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 16:45:14.819830 containerd[1594]: time="2025-05-16T16:45:14.819786425Z" level=info msg="Container 918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:14.827362 containerd[1594]: time="2025-05-16T16:45:14.827303936Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\""
May 16 16:45:14.827809 containerd[1594]: time="2025-05-16T16:45:14.827780904Z" level=info msg="StartContainer for \"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\""
May 16 16:45:14.828710 containerd[1594]: time="2025-05-16T16:45:14.828685998Z" level=info msg="connecting to shim 918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" protocol=ttrpc version=3
May 16 16:45:14.852500 systemd[1]: Started cri-containerd-918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da.scope - libcontainer container 918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da.
May 16 16:45:14.879968 containerd[1594]: time="2025-05-16T16:45:14.879863916Z" level=info msg="StartContainer for \"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\" returns successfully"
May 16 16:45:14.885145 systemd[1]: cri-containerd-918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da.scope: Deactivated successfully.
May 16 16:45:14.885800 containerd[1594]: time="2025-05-16T16:45:14.885752256Z" level=info msg="received exit event container_id:\"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\" id:\"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\" pid:5159 exited_at:{seconds:1747413914 nanos:885422117}"
May 16 16:45:14.886077 containerd[1594]: time="2025-05-16T16:45:14.885777574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\" id:\"918663daefc52c5282dfdc54ebdbf43430afce10314efc9e04c0f34677b689da\" pid:5159 exited_at:{seconds:1747413914 nanos:885422117}"
May 16 16:45:15.071243 kubelet[2729]: I0516 16:45:15.071201 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcce061b-d8de-4286-998a-b00bc4f7fefd" path="/var/lib/kubelet/pods/dcce061b-d8de-4286-998a-b00bc4f7fefd/volumes"
May 16 16:45:15.155483 kubelet[2729]: E0516 16:45:15.155416 2729 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 16:45:15.462493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633067521.mount: Deactivated successfully.
May 16 16:45:15.814086 kubelet[2729]: E0516 16:45:15.813968 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:15.815515 containerd[1594]: time="2025-05-16T16:45:15.815477353Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 16:45:16.185638 containerd[1594]: time="2025-05-16T16:45:16.185343416Z" level=info msg="Container 7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:16.415286 containerd[1594]: time="2025-05-16T16:45:16.415249969Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\""
May 16 16:45:16.415743 containerd[1594]: time="2025-05-16T16:45:16.415720074Z" level=info msg="StartContainer for \"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\""
May 16 16:45:16.417016 containerd[1594]: time="2025-05-16T16:45:16.416993598Z" level=info msg="connecting to shim 7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" protocol=ttrpc version=3
May 16 16:45:16.436518 systemd[1]: Started cri-containerd-7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519.scope - libcontainer container 7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519.
May 16 16:45:16.508667 systemd[1]: cri-containerd-7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519.scope: Deactivated successfully.
May 16 16:45:16.509750 containerd[1594]: time="2025-05-16T16:45:16.509719455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\" id:\"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\" pid:5202 exited_at:{seconds:1747413916 nanos:509365101}"
May 16 16:45:16.531754 containerd[1594]: time="2025-05-16T16:45:16.531705079Z" level=info msg="received exit event container_id:\"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\" id:\"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\" pid:5202 exited_at:{seconds:1747413916 nanos:509365101}"
May 16 16:45:16.539855 containerd[1594]: time="2025-05-16T16:45:16.539814663Z" level=info msg="StartContainer for \"7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519\" returns successfully"
May 16 16:45:16.553950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a63fcab538d609fb4acb1624e0a01bf199ee46308a524f6b9d1e71e55b19519-rootfs.mount: Deactivated successfully.
May 16 16:45:16.818741 kubelet[2729]: E0516 16:45:16.818694 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:16.820291 containerd[1594]: time="2025-05-16T16:45:16.820249523Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 16:45:16.994146 containerd[1594]: time="2025-05-16T16:45:16.994100840Z" level=info msg="Container 8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:17.144314 containerd[1594]: time="2025-05-16T16:45:17.144220525Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\""
May 16 16:45:17.144747 containerd[1594]: time="2025-05-16T16:45:17.144681313Z" level=info msg="StartContainer for \"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\""
May 16 16:45:17.145557 containerd[1594]: time="2025-05-16T16:45:17.145528434Z" level=info msg="connecting to shim 8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" protocol=ttrpc version=3
May 16 16:45:17.170557 systemd[1]: Started cri-containerd-8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f.scope - libcontainer container 8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f.
May 16 16:45:17.195857 systemd[1]: cri-containerd-8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f.scope: Deactivated successfully.
May 16 16:45:17.196683 containerd[1594]: time="2025-05-16T16:45:17.196644507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\" id:\"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\" pid:5242 exited_at:{seconds:1747413917 nanos:196240116}"
May 16 16:45:17.261186 containerd[1594]: time="2025-05-16T16:45:17.261129190Z" level=info msg="received exit event container_id:\"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\" id:\"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\" pid:5242 exited_at:{seconds:1747413917 nanos:196240116}"
May 16 16:45:17.269510 containerd[1594]: time="2025-05-16T16:45:17.269463819Z" level=info msg="StartContainer for \"8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f\" returns successfully"
May 16 16:45:17.282564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b483bd7662525b19fb2be2be662078bdce134229f5e46a010de7c237ce5bc5f-rootfs.mount: Deactivated successfully.
May 16 16:45:17.823923 kubelet[2729]: E0516 16:45:17.823882 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:17.826289 containerd[1594]: time="2025-05-16T16:45:17.826249066Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 16:45:17.841731 containerd[1594]: time="2025-05-16T16:45:17.841652468Z" level=info msg="Container d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:17.857937 containerd[1594]: time="2025-05-16T16:45:17.857889246Z" level=info msg="CreateContainer within sandbox \"5aed23cc7930f79fd8c9a76dc1db897e52fa4c491770484084a643548b8e0670\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\""
May 16 16:45:17.861125 containerd[1594]: time="2025-05-16T16:45:17.861059108Z" level=info msg="StartContainer for \"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\""
May 16 16:45:17.862146 containerd[1594]: time="2025-05-16T16:45:17.862110669Z" level=info msg="connecting to shim d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9" address="unix:///run/containerd/s/b24c57cd4e78bc4b1b644a9ba7af4de76e16d62e67ba99cc9e7569519f75fd0a" protocol=ttrpc version=3
May 16 16:45:17.890556 systemd[1]: Started cri-containerd-d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9.scope - libcontainer container d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9.
May 16 16:45:17.924554 containerd[1594]: time="2025-05-16T16:45:17.924494273Z" level=info msg="StartContainer for \"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" returns successfully"
May 16 16:45:17.993846 containerd[1594]: time="2025-05-16T16:45:17.993807023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"57db79da6c74bd4cc0dcbd45f47eb4c0e6691a638a1381046f6c984100fc5fda\" pid:5310 exited_at:{seconds:1747413917 nanos:993450114}"
May 16 16:45:18.359404 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 16:45:18.829986 kubelet[2729]: E0516 16:45:18.829954 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:18.844288 kubelet[2729]: I0516 16:45:18.844013 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gvcft" podStartSLOduration=4.843990511 podStartE2EDuration="4.843990511s" podCreationTimestamp="2025-05-16 16:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:18.84266539 +0000 UTC m=+253.855644933" watchObservedRunningTime="2025-05-16 16:45:18.843990511 +0000 UTC m=+253.856970044"
May 16 16:45:20.492292 kubelet[2729]: E0516 16:45:20.492209 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:20.891010 containerd[1594]: time="2025-05-16T16:45:20.890887774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"78c3cadef90bb5db9aa4e57741337b43fa64629d23155ac6e402bdec9584c7d4\" pid:5651 exit_status:1 exited_at:{seconds:1747413920 nanos:890504545}"
May 16 16:45:21.553849 systemd-networkd[1497]: lxc_health: Link UP
May 16 16:45:21.561180 systemd-networkd[1497]: lxc_health: Gained carrier
May 16 16:45:22.493243 kubelet[2729]: E0516 16:45:22.492804 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:22.838262 kubelet[2729]: E0516 16:45:22.837931 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:22.993626 systemd-networkd[1497]: lxc_health: Gained IPv6LL
May 16 16:45:23.182669 containerd[1594]: time="2025-05-16T16:45:23.182537167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"5038411c741c6fbd5f7c4b0b5f002a543e358791b9e649f05b2565d4f3cde42a\" pid:5847 exited_at:{seconds:1747413923 nanos:182200206}"
May 16 16:45:23.847265 kubelet[2729]: E0516 16:45:23.847199 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:25.489838 containerd[1594]: time="2025-05-16T16:45:25.489783718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"7d0a6c3c83020cecb14038cc86d439e46f8d7aa2adab454a87f108c798b0f7a2\" pid:5882 exited_at:{seconds:1747413925 nanos:489243902}"
May 16 16:45:27.069550 kubelet[2729]: E0516 16:45:27.069509 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:27.579697 containerd[1594]: time="2025-05-16T16:45:27.579641439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"ee4ed6c63145bd7db30441d3326935904ff80bdd59b0929dbd529ac4b681b6c6\" pid:5905 exited_at:{seconds:1747413927 nanos:579168341}"
May 16 16:45:29.069721 kubelet[2729]: E0516 16:45:29.069577 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:29.679356 containerd[1594]: time="2025-05-16T16:45:29.679281309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ebbeff3b98f1917c76e812342ddd1a2e720bc5bd77617027c51e2f0bbb2ca9\" id:\"6fb30e0e796f34bd86f1d9e835e8705fe64be15369e31aee2cf3d021a21432ab\" pid:5930 exited_at:{seconds:1747413929 nanos:678905053}"
May 16 16:45:29.685553 sshd[5044]: Connection closed by 10.0.0.1 port 58494
May 16 16:45:29.685992 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
May 16 16:45:29.690472 systemd[1]: sshd@58-10.0.0.76:22-10.0.0.1:58494.service: Deactivated successfully.
May 16 16:45:29.692571 systemd[1]: session-59.scope: Deactivated successfully.
May 16 16:45:29.693645 systemd-logind[1572]: Session 59 logged out. Waiting for processes to exit.
May 16 16:45:29.694834 systemd-logind[1572]: Removed session 59.