May 14 18:02:02.831946 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:02:02.831967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:02:02.831978 kernel: BIOS-provided physical RAM map:
May 14 18:02:02.831984 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:02:02.831991 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:02:02.831997 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:02:02.832005 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:02:02.832011 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:02:02.832020 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:02:02.832026 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:02:02.832033 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 18:02:02.832039 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:02:02.832045 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:02:02.832052 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:02:02.832062 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:02:02.832069 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:02:02.832076 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:02:02.832083 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:02:02.832090 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:02:02.832097 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:02:02.832104 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:02:02.832111 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:02:02.832117 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:02:02.832124 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:02:02.832131 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:02:02.832140 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:02:02.832147 kernel: NX (Execute Disable) protection: active
May 14 18:02:02.832154 kernel: APIC: Static calls initialized
May 14 18:02:02.832161 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 14 18:02:02.832168 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 14 18:02:02.832175 kernel: extended physical RAM map:
May 14 18:02:02.832182 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:02:02.832189 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:02:02.832196 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:02:02.832203 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:02:02.832210 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:02:02.832219 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:02:02.832226 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:02:02.832233 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 14 18:02:02.832240 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 14 18:02:02.832250 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 14 18:02:02.832257 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 14 18:02:02.832266 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 14 18:02:02.832273 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:02:02.832280 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:02:02.832288 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:02:02.832295 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:02:02.832302 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:02:02.832309 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:02:02.832316 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:02:02.832324 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:02:02.832333 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:02:02.832340 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:02:02.832347 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:02:02.832355 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:02:02.832362 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:02:02.832380 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:02:02.832398 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:02:02.832406 kernel: efi: EFI v2.7 by EDK II
May 14 18:02:02.832413 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 14 18:02:02.832420 kernel: random: crng init done
May 14 18:02:02.832428 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 18:02:02.832435 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 18:02:02.832445 kernel: secureboot: Secure boot disabled
May 14 18:02:02.832452 kernel: SMBIOS 2.8 present.
May 14 18:02:02.832459 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 18:02:02.832466 kernel: DMI: Memory slots populated: 1/1
May 14 18:02:02.832474 kernel: Hypervisor detected: KVM
May 14 18:02:02.832481 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:02:02.832488 kernel: kvm-clock: using sched offset of 3944134929 cycles
May 14 18:02:02.832496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:02:02.832503 kernel: tsc: Detected 2794.746 MHz processor
May 14 18:02:02.832511 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:02:02.832519 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:02:02.832528 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 18:02:02.832536 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 18:02:02.832544 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:02:02.832551 kernel: Using GB pages for direct mapping
May 14 18:02:02.832559 kernel: ACPI: Early table checksum verification disabled
May 14 18:02:02.832566 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 18:02:02.832574 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 18:02:02.832581 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832589 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832598 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 18:02:02.832605 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832613 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832620 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832628 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:02.832635 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 18:02:02.832643 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 18:02:02.832650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 18:02:02.832660 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 18:02:02.832667 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 18:02:02.832674 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 18:02:02.832682 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 18:02:02.832689 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 18:02:02.832697 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 18:02:02.832704 kernel: No NUMA configuration found
May 14 18:02:02.832711 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 18:02:02.832719 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 14 18:02:02.832726 kernel: Zone ranges:
May 14 18:02:02.832735 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:02:02.832743 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 18:02:02.832750 kernel: Normal empty
May 14 18:02:02.832757 kernel: Device empty
May 14 18:02:02.832765 kernel: Movable zone start for each node
May 14 18:02:02.832772 kernel: Early memory node ranges
May 14 18:02:02.832779 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 18:02:02.832787 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 18:02:02.832794 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 18:02:02.832804 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 18:02:02.832814 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 18:02:02.832821 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 18:02:02.832828 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 14 18:02:02.832835 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 14 18:02:02.832843 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 18:02:02.832858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:02:02.832866 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 18:02:02.832882 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 18:02:02.832890 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:02:02.832898 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 18:02:02.832906 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 18:02:02.832915 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 18:02:02.832923 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 18:02:02.832930 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 18:02:02.832938 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:02:02.832946 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:02:02.832955 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:02:02.832963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:02:02.832971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:02:02.832978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:02:02.832986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:02:02.832994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:02:02.833001 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:02:02.833009 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:02:02.833017 kernel: TSC deadline timer available
May 14 18:02:02.833026 kernel: CPU topo: Max. logical packages: 1
May 14 18:02:02.833034 kernel: CPU topo: Max. logical dies: 1
May 14 18:02:02.833041 kernel: CPU topo: Max. dies per package: 1
May 14 18:02:02.833049 kernel: CPU topo: Max. threads per core: 1
May 14 18:02:02.833057 kernel: CPU topo: Num. cores per package: 4
May 14 18:02:02.833064 kernel: CPU topo: Num. threads per package: 4
May 14 18:02:02.833072 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 14 18:02:02.833079 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:02:02.833087 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 18:02:02.833094 kernel: kvm-guest: setup PV sched yield
May 14 18:02:02.833104 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 18:02:02.833112 kernel: Booting paravirtualized kernel on KVM
May 14 18:02:02.833119 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:02:02.833127 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 18:02:02.833135 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 14 18:02:02.833143 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 14 18:02:02.833151 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 18:02:02.833158 kernel: kvm-guest: PV spinlocks enabled
May 14 18:02:02.833166 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 18:02:02.833177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:02:02.833185 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:02:02.833193 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:02:02.833201 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:02:02.833208 kernel: Fallback order for Node 0: 0
May 14 18:02:02.833216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 14 18:02:02.833224 kernel: Policy zone: DMA32
May 14 18:02:02.833231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:02:02.833241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:02:02.833248 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:02:02.833256 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:02:02.833264 kernel: Dynamic Preempt: voluntary
May 14 18:02:02.833271 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:02:02.833280 kernel: rcu: RCU event tracing is enabled.
May 14 18:02:02.833288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:02:02.833295 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:02:02.833303 kernel: Rude variant of Tasks RCU enabled.
May 14 18:02:02.833313 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:02:02.833321 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:02:02.833328 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:02:02.833336 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:02.833344 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:02.833352 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:02.833359 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 18:02:02.833386 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:02:02.833394 kernel: Console: colour dummy device 80x25
May 14 18:02:02.833403 kernel: printk: legacy console [ttyS0] enabled
May 14 18:02:02.833411 kernel: ACPI: Core revision 20240827
May 14 18:02:02.833419 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:02:02.833427 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:02:02.833434 kernel: x2apic enabled
May 14 18:02:02.833442 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:02:02.833450 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 18:02:02.833458 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 18:02:02.833465 kernel: kvm-guest: setup PV IPIs
May 14 18:02:02.833475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:02:02.833483 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:02:02.833491 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 18:02:02.833499 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 18:02:02.833506 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 18:02:02.833514 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 18:02:02.833522 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:02:02.833529 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:02:02.833537 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:02:02.833547 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:02:02.833555 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 18:02:02.833562 kernel: RETBleed: Mitigation: untrained return thunk
May 14 18:02:02.833571 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:02:02.833579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:02:02.833587 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 18:02:02.833595 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 18:02:02.833603 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 18:02:02.833612 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:02:02.833620 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:02:02.833628 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:02:02.833635 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:02:02.833643 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 18:02:02.833651 kernel: Freeing SMP alternatives memory: 32K
May 14 18:02:02.833658 kernel: pid_max: default: 32768 minimum: 301
May 14 18:02:02.833666 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:02:02.833673 kernel: landlock: Up and running.
May 14 18:02:02.833683 kernel: SELinux: Initializing.
May 14 18:02:02.833691 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:02:02.833698 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:02:02.833706 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 18:02:02.833714 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 18:02:02.833722 kernel: ... version: 0
May 14 18:02:02.833729 kernel: ... bit width: 48
May 14 18:02:02.833737 kernel: ... generic registers: 6
May 14 18:02:02.833744 kernel: ... value mask: 0000ffffffffffff
May 14 18:02:02.833754 kernel: ... max period: 00007fffffffffff
May 14 18:02:02.833761 kernel: ... fixed-purpose events: 0
May 14 18:02:02.833769 kernel: ... event mask: 000000000000003f
May 14 18:02:02.833777 kernel: signal: max sigframe size: 1776
May 14 18:02:02.833784 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:02:02.833801 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:02:02.833809 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:02:02.833824 kernel: smp: Bringing up secondary CPUs ...
May 14 18:02:02.833833 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:02:02.833855 kernel: .... node #0, CPUs: #1 #2 #3
May 14 18:02:02.833868 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:02:02.833876 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 18:02:02.833884 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 137196K reserved, 0K cma-reserved)
May 14 18:02:02.833894 kernel: devtmpfs: initialized
May 14 18:02:02.833902 kernel: x86/mm: Memory block size: 128MB
May 14 18:02:02.833913 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 18:02:02.833921 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 18:02:02.833929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 18:02:02.833939 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 18:02:02.833946 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 14 18:02:02.833954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 18:02:02.833962 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:02:02.833970 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:02:02.833978 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:02:02.833985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:02:02.833993 kernel: audit: initializing netlink subsys (disabled)
May 14 18:02:02.834001 kernel: audit: type=2000 audit(1747245720.808:1): state=initialized audit_enabled=0 res=1
May 14 18:02:02.834011 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:02:02.834019 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:02:02.834026 kernel: cpuidle: using governor menu
May 14 18:02:02.834034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:02:02.834042 kernel: dca service started, version 1.12.1
May 14 18:02:02.834050 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 14 18:02:02.834057 kernel: PCI: Using configuration type 1 for base access
May 14 18:02:02.834065 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:02:02.834073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:02:02.834083 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:02:02.834091 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:02:02.834098 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:02:02.834106 kernel: ACPI: Added _OSI(Module Device)
May 14 18:02:02.834114 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:02:02.834121 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:02:02.834129 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:02:02.834137 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:02:02.834145 kernel: ACPI: Interpreter enabled
May 14 18:02:02.834154 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 18:02:02.834162 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:02:02.834169 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:02:02.834177 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:02:02.834185 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 18:02:02.834193 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:02:02.834384 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:02:02.834506 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 18:02:02.834625 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 18:02:02.834635 kernel: PCI host bridge to bus 0000:00
May 14 18:02:02.834751 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:02:02.834876 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:02:02.835016 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:02:02.835122 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 18:02:02.835225 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 18:02:02.835332 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 18:02:02.835452 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:02:02.835581 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 14 18:02:02.835705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 14 18:02:02.835820 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 14 18:02:02.835961 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 14 18:02:02.836116 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 14 18:02:02.836281 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:02:02.836440 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:02:02.836558 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 14 18:02:02.836673 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 14 18:02:02.836788 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 18:02:02.836924 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:02:02.837046 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 14 18:02:02.837163 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 14 18:02:02.837278 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 18:02:02.837425 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:02:02.837587 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 14 18:02:02.837705 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 14 18:02:02.837823 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 18:02:02.837956 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 14 18:02:02.838080 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 14 18:02:02.838196 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 18:02:02.838317 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 14 18:02:02.838502 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 14 18:02:02.838626 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 14 18:02:02.838753 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 14 18:02:02.838877 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 14 18:02:02.838889 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:02:02.838897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:02:02.838905 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:02:02.838913 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:02:02.838920 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 18:02:02.838928 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 18:02:02.838939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 18:02:02.838946 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 18:02:02.838954 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 18:02:02.838962 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 18:02:02.838969 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 18:02:02.838977 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 18:02:02.838985 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 18:02:02.838992 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 18:02:02.839000 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 18:02:02.839009 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 18:02:02.839017 kernel: iommu: Default domain type: Translated
May 14 18:02:02.839025 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:02:02.839032 kernel: efivars: Registered efivars operations
May 14 18:02:02.839040 kernel: PCI: Using ACPI for IRQ routing
May 14 18:02:02.839047 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:02:02.839055 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 18:02:02.839062 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 18:02:02.839070 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 14 18:02:02.839077 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 14 18:02:02.839087 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 18:02:02.839094 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 18:02:02.839102 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 14 18:02:02.839109 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 18:02:02.839224 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 18:02:02.839337 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 18:02:02.839492 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:02:02.839507 kernel: vgaarb: loaded
May 14 18:02:02.839516 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:02:02.839523 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:02:02.839531 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:02:02.839539 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:02:02.839546 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:02:02.839554 kernel: pnp: PnP ACPI init
May 14 18:02:02.839691 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 18:02:02.839708 kernel: pnp: PnP ACPI: found 6 devices
May 14 18:02:02.839716 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:02:02.839724 kernel: NET: Registered PF_INET protocol family
May 14 18:02:02.839732 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:02:02.839740 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:02:02.839748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:02:02.839756 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:02:02.839764 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:02:02.839772 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:02:02.839783 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:02:02.839791 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:02:02.839799 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:02:02.839806 kernel: NET: Registered PF_XDP protocol family
May 14 18:02:02.839958 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 14 18:02:02.840095 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 14 18:02:02.840202 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:02:02.840307 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:02:02.840430 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:02:02.840535 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 18:02:02.840639 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 18:02:02.840743 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 18:02:02.840755 kernel: PCI: CLS 0 bytes, default 64
May 14 18:02:02.840766 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:02:02.840776 kernel: Initialise system trusted keyrings
May 14 18:02:02.840791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:02:02.840801 kernel: Key type asymmetric registered
May 14 18:02:02.840811 kernel: Asymmetric key parser 'x509' registered
May 14 18:02:02.840821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:02:02.840831 kernel: io scheduler mq-deadline registered
May 14 18:02:02.840843 kernel: io scheduler kyber registered
May 14 18:02:02.840864 kernel: io scheduler bfq registered
May 14 18:02:02.840876 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:02:02.840887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 18:02:02.840897 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 18:02:02.840906 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 18:02:02.840914 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:02:02.840923 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:02:02.840931 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:02:02.840939 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:02:02.840947 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:02:02.841070 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 18:02:02.841083 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 14 18:02:02.841192 kernel: rtc_cmos 00:04: registered as rtc0
May 14 18:02:02.841299 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T18:02:02 UTC (1747245722)
May 14 18:02:02.841425 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 18:02:02.841436 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 18:02:02.841445 kernel: efifb: probing for efifb
May 14 18:02:02.841453 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 18:02:02.841464 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 18:02:02.841472 kernel: efifb: scrolling: redraw
May 14 18:02:02.841480 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 18:02:02.841488 kernel: Console: switching to colour frame buffer device 160x50
May 14 18:02:02.841496 kernel: fb0: EFI VGA frame buffer device
May 14 18:02:02.841504 kernel: pstore: Using crash dump compression: deflate
May 14 18:02:02.841512 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 18:02:02.841520 kernel: NET: Registered PF_INET6 protocol family
May 14 18:02:02.841528 kernel: Segment Routing with IPv6
May 14 18:02:02.841538 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:02:02.841546 kernel: NET: Registered PF_PACKET protocol family
May 14 18:02:02.841554 kernel: Key type dns_resolver registered
May 14 18:02:02.841562 kernel: IPI shorthand broadcast: enabled
May 14 18:02:02.841570 kernel: sched_clock: Marking stable (2822003435, 161394073)->(3003335403, -19937895)
May 14 18:02:02.841578 kernel: registered taskstats version 1
May 14 18:02:02.841586 kernel: Loading compiled-in X.509 certificates
May 14 18:02:02.841594 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:02:02.841602 kernel: Demotion targets for Node 0: null
May 14 18:02:02.841610 kernel: Key 
type .fscrypt registered May 14 18:02:02.841620 kernel: Key type fscrypt-provisioning registered May 14 18:02:02.841628 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 18:02:02.841636 kernel: ima: Allocated hash algorithm: sha1 May 14 18:02:02.841644 kernel: ima: No architecture policies found May 14 18:02:02.841652 kernel: clk: Disabling unused clocks May 14 18:02:02.841660 kernel: Warning: unable to open an initial console. May 14 18:02:02.841668 kernel: Freeing unused kernel image (initmem) memory: 54424K May 14 18:02:02.841676 kernel: Write protecting the kernel read-only data: 24576k May 14 18:02:02.841686 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 18:02:02.841694 kernel: Run /init as init process May 14 18:02:02.841702 kernel: with arguments: May 14 18:02:02.841710 kernel: /init May 14 18:02:02.841718 kernel: with environment: May 14 18:02:02.841726 kernel: HOME=/ May 14 18:02:02.841733 kernel: TERM=linux May 14 18:02:02.841741 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 18:02:02.841750 systemd[1]: Successfully made /usr/ read-only. May 14 18:02:02.841763 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:02:02.841772 systemd[1]: Detected virtualization kvm. May 14 18:02:02.841781 systemd[1]: Detected architecture x86-64. May 14 18:02:02.841789 systemd[1]: Running in initrd. May 14 18:02:02.841798 systemd[1]: No hostname configured, using default hostname. May 14 18:02:02.841806 systemd[1]: Hostname set to <localhost>. May 14 18:02:02.841814 systemd[1]: Initializing machine ID from VM UUID. May 14 18:02:02.841825 systemd[1]: Queued start job for default target initrd.target. 
May 14 18:02:02.841834 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:02:02.841842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:02:02.841859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 18:02:02.841868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:02:02.841880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 18:02:02.841889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 18:02:02.841902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 18:02:02.841914 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 18:02:02.841924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:02:02.841933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:02:02.841942 systemd[1]: Reached target paths.target - Path Units. May 14 18:02:02.841950 systemd[1]: Reached target slices.target - Slice Units. May 14 18:02:02.841959 systemd[1]: Reached target swap.target - Swaps. May 14 18:02:02.841967 systemd[1]: Reached target timers.target - Timer Units. May 14 18:02:02.841978 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:02:02.841986 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:02:02.841995 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 18:02:02.842004 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 14 18:02:02.842012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:02:02.842021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:02:02.842029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:02:02.842038 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:02:02.842046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 18:02:02.842057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:02:02.842066 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 18:02:02.842075 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 18:02:02.842083 systemd[1]: Starting systemd-fsck-usr.service... May 14 18:02:02.842092 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:02:02.842100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:02:02.842109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:02:02.842117 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 18:02:02.842129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:02:02.842137 systemd[1]: Finished systemd-fsck-usr.service. May 14 18:02:02.842166 systemd-journald[220]: Collecting audit messages is disabled. May 14 18:02:02.842188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 18:02:02.842198 systemd-journald[220]: Journal started May 14 18:02:02.842217 systemd-journald[220]: Runtime Journal (/run/log/journal/53c1912212a348f3804f7d9645d7ad13) is 6M, max 48.5M, 42.4M free. 
May 14 18:02:02.836321 systemd-modules-load[222]: Inserted module 'overlay' May 14 18:02:02.845394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:02:02.848386 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:02:02.852512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 18:02:02.864393 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 18:02:02.865695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:02:02.872439 kernel: Bridge firewalling registered May 14 18:02:02.866124 systemd-modules-load[222]: Inserted module 'br_netfilter' May 14 18:02:02.867427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:02:02.868627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:02:02.870423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:02:02.873254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:02:02.877958 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 18:02:02.882976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:02:02.889241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:02:02.890743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:02:02.893097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:02:02.896705 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 14 18:02:02.899113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:02:02.924497 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0 May 14 18:02:02.944560 systemd-resolved[262]: Positive Trust Anchors: May 14 18:02:02.944572 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:02:02.944603 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:02:02.947020 systemd-resolved[262]: Defaulting to hostname 'linux'. May 14 18:02:02.953296 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:02:02.954562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:02:03.033397 kernel: SCSI subsystem initialized May 14 18:02:03.043397 kernel: Loading iSCSI transport class v2.0-870. 
May 14 18:02:03.054397 kernel: iscsi: registered transport (tcp) May 14 18:02:03.075641 kernel: iscsi: registered transport (qla4xxx) May 14 18:02:03.075675 kernel: QLogic iSCSI HBA Driver May 14 18:02:03.095169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:02:03.111305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:02:03.115067 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:02:03.163401 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 18:02:03.164967 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 18:02:03.225414 kernel: raid6: avx2x4 gen() 29900 MB/s May 14 18:02:03.242394 kernel: raid6: avx2x2 gen() 30518 MB/s May 14 18:02:03.259488 kernel: raid6: avx2x1 gen() 25502 MB/s May 14 18:02:03.259511 kernel: raid6: using algorithm avx2x2 gen() 30518 MB/s May 14 18:02:03.277509 kernel: raid6: .... xor() 19602 MB/s, rmw enabled May 14 18:02:03.277542 kernel: raid6: using avx2x2 recovery algorithm May 14 18:02:03.298401 kernel: xor: automatically using best checksumming function avx May 14 18:02:03.466408 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 18:02:03.474854 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 18:02:03.477518 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:02:03.519281 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 14 18:02:03.525754 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:02:03.526707 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 18:02:03.554662 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation May 14 18:02:03.586931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 14 18:02:03.588426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:02:03.967502 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:02:03.969948 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 18:02:04.015454 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 14 18:02:04.027540 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 18:02:04.027763 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 18:02:04.027789 kernel: GPT:9289727 != 19775487 May 14 18:02:04.027809 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 18:02:04.027837 kernel: GPT:9289727 != 19775487 May 14 18:02:04.027856 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 18:02:04.027877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:02:04.032397 kernel: libata version 3.00 loaded. May 14 18:02:04.034399 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 14 18:02:04.036397 kernel: cryptd: max_cpu_qlen set to 1000 May 14 18:02:04.042472 kernel: ahci 0000:00:1f.2: version 3.0 May 14 18:02:04.075116 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 18:02:04.075134 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 14 18:02:04.075286 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 14 18:02:04.075444 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 18:02:04.075583 kernel: AES CTR mode by8 optimization enabled May 14 18:02:04.075595 kernel: scsi host0: ahci May 14 18:02:04.075740 kernel: scsi host1: ahci May 14 18:02:04.075888 kernel: scsi host2: ahci May 14 18:02:04.076068 kernel: scsi host3: ahci May 14 18:02:04.076206 kernel: scsi host4: ahci May 14 18:02:04.076339 kernel: scsi host5: ahci May 14 18:02:04.076500 kernel: ata1: SATA max UDMA/133 abar 
m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 14 18:02:04.076512 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 14 18:02:04.076522 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 14 18:02:04.076533 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 14 18:02:04.076543 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 14 18:02:04.076553 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 14 18:02:04.053941 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:02:04.054075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:02:04.055548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:02:04.057949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:02:04.061309 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:02:04.080390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:02:04.080541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:02:04.106120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 18:02:04.114434 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 18:02:04.123570 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 18:02:04.132771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 18:02:04.134065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 14 18:02:04.138139 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 18:02:04.141066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:02:04.175196 disk-uuid[623]: Primary Header is updated. May 14 18:02:04.175196 disk-uuid[623]: Secondary Entries is updated. May 14 18:02:04.175196 disk-uuid[623]: Secondary Header is updated. May 14 18:02:04.179421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:02:04.182890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:02:04.186062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:02:04.379411 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 18:02:04.379485 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 14 18:02:04.380393 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 18:02:04.381628 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 18:02:04.381708 kernel: ata3.00: applying bridge limits May 14 18:02:04.382394 kernel: ata3.00: configured for UDMA/100 May 14 18:02:04.384395 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 18:02:04.388391 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 18:02:04.388410 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 18:02:04.388421 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 18:02:04.443402 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 18:02:04.469401 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 18:02:04.469477 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 14 18:02:04.827656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 18:02:04.829430 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:02:04.831102 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 14 18:02:04.832297 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:02:04.835336 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 18:02:04.872421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 18:02:05.186418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:02:05.186849 disk-uuid[626]: The operation has completed successfully. May 14 18:02:05.217842 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 18:02:05.217963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 18:02:05.257644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 18:02:05.288356 sh[665]: Success May 14 18:02:05.308642 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 18:02:05.308722 kernel: device-mapper: uevent: version 1.0.3 May 14 18:02:05.308737 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 18:02:05.319427 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 14 18:02:05.353460 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 18:02:05.357103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 18:02:05.376660 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 14 18:02:05.383559 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 18:02:05.383590 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677) May 14 18:02:05.384399 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7 May 14 18:02:05.385994 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 18:02:05.386017 kernel: BTRFS info (device dm-0): using free-space-tree May 14 18:02:05.391936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 18:02:05.394672 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 18:02:05.397387 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 18:02:05.400627 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 18:02:05.404033 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 18:02:05.430396 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (708) May 14 18:02:05.430457 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:02:05.431892 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 18:02:05.431919 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:02:05.439401 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:02:05.440688 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 18:02:05.441862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 18:02:05.527554 ignition[747]: Ignition 2.21.0 May 14 18:02:05.527569 ignition[747]: Stage: fetch-offline May 14 18:02:05.527602 ignition[747]: no configs at "/usr/lib/ignition/base.d" May 14 18:02:05.527611 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:02:05.527691 ignition[747]: parsed url from cmdline: "" May 14 18:02:05.527695 ignition[747]: no config URL provided May 14 18:02:05.527700 ignition[747]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:02:05.527708 ignition[747]: no config at "/usr/lib/ignition/user.ign" May 14 18:02:05.527732 ignition[747]: op(1): [started] loading QEMU firmware config module May 14 18:02:05.527737 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 18:02:05.537899 ignition[747]: op(1): [finished] loading QEMU firmware config module May 14 18:02:05.537986 ignition[747]: QEMU firmware config was not found. Ignoring... May 14 18:02:05.550986 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:02:05.552855 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:02:05.585595 ignition[747]: parsing config with SHA512: 48a7f9c095cf8ac4e079d1c59719b258eefec5d84543082cb130a17dcc0f24161af5cd3550e781ce199ce0d1fe1f94703bf95a0f52e6c983d844a83ac68a70c4 May 14 18:02:05.588829 unknown[747]: fetched base config from "system" May 14 18:02:05.589022 unknown[747]: fetched user config from "qemu" May 14 18:02:05.589724 ignition[747]: fetch-offline: fetch-offline passed May 14 18:02:05.589849 ignition[747]: Ignition finished successfully May 14 18:02:05.592921 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 14 18:02:05.599149 systemd-networkd[857]: lo: Link UP May 14 18:02:05.599158 systemd-networkd[857]: lo: Gained carrier May 14 18:02:05.600622 systemd-networkd[857]: Enumeration completed May 14 18:02:05.600974 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:02:05.600978 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:02:05.601650 systemd-networkd[857]: eth0: Link UP May 14 18:02:05.601653 systemd-networkd[857]: eth0: Gained carrier May 14 18:02:05.601661 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:02:05.602335 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:02:05.606440 systemd[1]: Reached target network.target - Network. May 14 18:02:05.608870 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 18:02:05.610733 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 18:02:05.629422 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:02:05.655298 ignition[861]: Ignition 2.21.0 May 14 18:02:05.655311 ignition[861]: Stage: kargs May 14 18:02:05.655688 ignition[861]: no configs at "/usr/lib/ignition/base.d" May 14 18:02:05.655699 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:02:05.660897 ignition[861]: kargs: kargs passed May 14 18:02:05.661044 ignition[861]: Ignition finished successfully May 14 18:02:05.666768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 18:02:05.670335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 14 18:02:05.712710 ignition[870]: Ignition 2.21.0 May 14 18:02:05.713124 ignition[870]: Stage: disks May 14 18:02:05.713994 ignition[870]: no configs at "/usr/lib/ignition/base.d" May 14 18:02:05.714008 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:02:05.716251 ignition[870]: disks: disks passed May 14 18:02:05.716308 ignition[870]: Ignition finished successfully May 14 18:02:05.719572 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 18:02:05.719886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 18:02:05.723232 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 18:02:05.727212 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:02:05.727294 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:02:05.727948 systemd[1]: Reached target basic.target - Basic System. May 14 18:02:05.733835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 18:02:05.770846 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 18:02:05.778341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 18:02:05.783418 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 18:02:05.914416 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none. May 14 18:02:05.915500 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 18:02:05.917468 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 18:02:05.920191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:02:05.921112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 18:02:05.922971 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 14 18:02:05.923010 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 18:02:05.923034 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:02:05.942993 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 18:02:05.946548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 18:02:05.951285 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (888) May 14 18:02:05.951311 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:02:05.951321 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 18:02:05.951331 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:02:05.954214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 18:02:05.990850 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory May 14 18:02:05.996031 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory May 14 18:02:06.000748 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory May 14 18:02:06.005725 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory May 14 18:02:06.100793 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 18:02:06.102143 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 18:02:06.105595 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 18:02:06.121388 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:02:06.135581 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 14 18:02:06.151790 ignition[1002]: INFO : Ignition 2.21.0
May 14 18:02:06.151790 ignition[1002]: INFO : Stage: mount
May 14 18:02:06.153674 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:06.153674 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:06.153674 ignition[1002]: INFO : mount: mount passed
May 14 18:02:06.153674 ignition[1002]: INFO : Ignition finished successfully
May 14 18:02:06.156243 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:02:06.158765 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:02:06.382341 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:02:06.384063 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:02:06.417191 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1014)
May 14 18:02:06.417257 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:06.417268 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:02:06.418292 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:02:06.424727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:02:06.452145 ignition[1031]: INFO : Ignition 2.21.0
May 14 18:02:06.452145 ignition[1031]: INFO : Stage: files
May 14 18:02:06.454383 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:06.454383 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:06.454383 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:02:06.458922 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:02:06.458922 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:02:06.458922 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:02:06.458922 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:02:06.458922 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:02:06.457857 unknown[1031]: wrote ssh authorized keys file for user: core
May 14 18:02:06.469138 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:02:06.469138 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 18:02:06.538327 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:02:06.804572 systemd-networkd[857]: eth0: Gained IPv6LL
May 14 18:02:07.124275 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:02:07.124275 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:02:07.128407 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 18:02:07.594154 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 18:02:07.708052 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:02:07.710256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:02:07.725216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 18:02:08.084531 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 18:02:11.547252 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:02:11.547252 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 18:02:11.552122 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:02:11.675388 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:02:11.675388 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 18:02:11.675388 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 18:02:11.681279 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:02:11.681279 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:02:11.681279 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 18:02:11.681279 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 14 18:02:11.696436 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:02:11.700477 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:02:11.702329 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 18:02:11.702329 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:02:11.705434 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:02:11.705434 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:02:11.705434 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:02:11.705434 ignition[1031]: INFO : files: files passed
May 14 18:02:11.705434 ignition[1031]: INFO : Ignition finished successfully
May 14 18:02:11.714303 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:02:11.716180 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:02:11.719135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:02:11.733555 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:02:11.733700 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:02:11.738591 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 18:02:11.742743 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:11.744670 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:11.746481 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:11.749114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:02:11.752191 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:02:11.754868 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:02:11.802931 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:02:11.803071 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:02:11.804758 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:02:11.808219 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:02:11.811777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:02:11.813735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:02:11.844317 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:02:11.848484 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:02:11.872336 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:02:11.872507 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:02:11.874699 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:02:11.876851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:02:11.876964 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:02:11.880558 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:02:11.881738 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:02:11.882073 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:02:11.882422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:02:11.882928 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:02:11.883286 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:02:11.883826 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:02:11.884147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:02:11.884503 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:02:11.884965 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:02:11.885298 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:02:11.885782 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:02:11.885889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:02:11.906980 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:02:11.908199 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:02:11.909237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:02:11.912319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:02:11.914913 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:02:11.915075 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:02:11.917929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:02:11.918096 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:02:11.921298 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:02:11.921423 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:02:11.926480 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:02:11.926649 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:02:11.929788 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:02:11.931800 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:02:11.931898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:02:11.933909 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:02:11.933992 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:02:11.938221 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:02:11.938408 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:02:11.941775 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:02:11.941912 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:02:11.947855 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:02:11.949060 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:02:11.949213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:02:11.963272 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:02:11.963428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:02:11.963592 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:02:11.964424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:02:11.964602 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:02:11.971015 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:02:11.971138 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:02:11.978723 ignition[1087]: INFO : Ignition 2.21.0
May 14 18:02:11.978723 ignition[1087]: INFO : Stage: umount
May 14 18:02:11.981770 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:11.981770 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:11.984556 ignition[1087]: INFO : umount: umount passed
May 14 18:02:11.984556 ignition[1087]: INFO : Ignition finished successfully
May 14 18:02:11.988775 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:02:11.988905 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:02:11.991360 systemd[1]: Stopped target network.target - Network.
May 14 18:02:11.993517 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:02:11.993572 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:02:11.995759 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:02:11.995805 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:02:11.998084 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:02:11.998137 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:02:11.999385 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:02:11.999431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:02:11.999786 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:02:12.000152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:02:12.001778 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:02:12.002358 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:02:12.002494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:02:12.008463 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:02:12.008584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:02:12.013242 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:02:12.014980 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:02:12.015037 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:02:12.015860 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:02:12.015909 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:02:12.021596 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:02:12.028138 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:02:12.028306 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:02:12.031902 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:02:12.032092 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:02:12.033778 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:02:12.033821 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:02:12.037613 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:02:12.039073 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:02:12.039125 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:02:12.039448 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:02:12.039489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:02:12.045403 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:02:12.045495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:02:12.048627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:02:12.052008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:02:12.067040 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:02:12.072564 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:02:12.072933 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:02:12.072977 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:02:12.078542 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:02:12.078581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:02:12.080535 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:02:12.080581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:02:12.083572 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:02:12.083618 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:02:12.085528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:02:12.085576 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:02:12.088667 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:02:12.090662 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:02:12.090715 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:02:12.095569 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:02:12.098292 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:02:12.101132 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:02:12.101223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:12.105201 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:02:12.110563 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:02:12.118778 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:02:12.118915 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:02:12.122521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:02:12.124865 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:02:12.150882 systemd[1]: Switching root.
May 14 18:02:12.193909 systemd-journald[220]: Journal stopped
May 14 18:02:13.306352 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 14 18:02:13.306482 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:02:13.306503 kernel: SELinux: policy capability open_perms=1
May 14 18:02:13.306521 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:02:13.306536 kernel: SELinux: policy capability always_check_network=0
May 14 18:02:13.306550 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:02:13.306564 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:02:13.306579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:02:13.306593 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:02:13.306608 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:02:13.306641 kernel: audit: type=1403 audit(1747245732.447:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:02:13.306658 systemd[1]: Successfully loaded SELinux policy in 48.559ms.
May 14 18:02:13.306690 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.883ms.
May 14 18:02:13.306709 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:02:13.306725 systemd[1]: Detected virtualization kvm.
May 14 18:02:13.306742 systemd[1]: Detected architecture x86-64.
May 14 18:02:13.306758 systemd[1]: Detected first boot.
May 14 18:02:13.306773 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:02:13.306788 zram_generator::config[1131]: No configuration found.
May 14 18:02:13.306804 kernel: Guest personality initialized and is inactive
May 14 18:02:13.306821 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 18:02:13.306835 kernel: Initialized host personality
May 14 18:02:13.306850 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:02:13.306865 systemd[1]: Populated /etc with preset unit settings.
May 14 18:02:13.306882 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:02:13.306904 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:02:13.306919 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:02:13.306935 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:02:13.306951 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:02:13.306970 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:02:13.306985 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:02:13.306999 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:02:13.307015 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:02:13.307031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:02:13.307047 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:02:13.307063 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:02:13.307081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:02:13.307097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:02:13.307115 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:02:13.307132 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:02:13.307149 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:02:13.307165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:02:13.307181 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 18:02:13.307196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:02:13.307212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:02:13.307230 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:02:13.307246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:02:13.307261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:02:13.307276 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:02:13.307292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:02:13.307308 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:02:13.307324 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:02:13.307339 systemd[1]: Reached target swap.target - Swaps.
May 14 18:02:13.307355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:02:13.307399 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:02:13.307418 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:02:13.307433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:02:13.307451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:02:13.307467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:02:13.307482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:02:13.307497 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:02:13.307513 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:02:13.307529 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:02:13.307547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:13.307569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:02:13.307584 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:02:13.307599 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:02:13.307631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:02:13.307647 systemd[1]: Reached target machines.target - Containers.
May 14 18:02:13.307664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:02:13.307680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:13.307695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:02:13.307714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:02:13.307729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:13.307745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:02:13.307761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:13.307777 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:02:13.307792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:13.307810 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:02:13.307827 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:02:13.307848 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:02:13.307864 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:02:13.307879 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:02:13.307895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:13.307911 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:02:13.307927 kernel: loop: module loaded
May 14 18:02:13.307943 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:02:13.307958 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:02:13.307974 kernel: fuse: init (API version 7.41)
May 14 18:02:13.307992 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:02:13.308008 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:02:13.308024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:02:13.308039 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:02:13.308054 systemd[1]: Stopped verity-setup.service.
May 14 18:02:13.308073 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:13.308089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:02:13.308105 kernel: ACPI: bus type drm_connector registered
May 14 18:02:13.308120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:02:13.308136 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:02:13.308165 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:02:13.308203 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:02:13.308219 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:02:13.308261 systemd-journald[1206]: Collecting audit messages is disabled.
May 14 18:02:13.308294 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:02:13.308310 systemd-journald[1206]: Journal started
May 14 18:02:13.308346 systemd-journald[1206]: Runtime Journal (/run/log/journal/53c1912212a348f3804f7d9645d7ad13) is 6M, max 48.5M, 42.4M free.
May 14 18:02:12.986707 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:02:13.013599 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:02:13.014138 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:02:13.311601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:02:13.313419 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:02:13.315189 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:02:13.315460 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:02:13.317357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:13.317768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:13.319540 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:02:13.319842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:02:13.321504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:13.321797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:13.324010 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:02:13.324311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:02:13.326015 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:13.326274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:13.327856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:02:13.329777 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:02:13.331561 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:02:13.333403 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:02:13.353002 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:02:13.356339 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:02:13.360501 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:02:13.361952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:02:13.361992 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:02:13.364659 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:02:13.367788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:02:13.369174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:13.370684 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:02:13.374512 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:02:13.376009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:02:13.378624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:02:13.381498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:02:13.382677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:02:13.386217 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:02:13.390846 systemd-journald[1206]: Time spent on flushing to /var/log/journal/53c1912212a348f3804f7d9645d7ad13 is 18.608ms for 1067 entries.
May 14 18:02:13.390846 systemd-journald[1206]: System Journal (/var/log/journal/53c1912212a348f3804f7d9645d7ad13) is 8M, max 195.6M, 187.6M free.
May 14 18:02:13.425180 systemd-journald[1206]: Received client request to flush runtime journal.
May 14 18:02:13.425232 kernel: loop0: detected capacity change from 0 to 113872
May 14 18:02:13.390498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:02:13.396549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:02:13.398359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:02:13.399861 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:02:13.424690 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:02:13.426782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:02:13.429009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:02:13.434060 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:02:13.438472 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:02:13.447407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:02:13.451971 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:02:13.459545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:02:13.471984 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:02:13.474649 kernel: loop1: detected capacity change from 0 to 205544
May 14 18:02:13.493469 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 14 18:02:13.493488 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 14 18:02:13.499840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:02:13.506411 kernel: loop2: detected capacity change from 0 to 146240
May 14 18:02:13.544413 kernel: loop3: detected capacity change from 0 to 113872
May 14 18:02:13.555403 kernel: loop4: detected capacity change from 0 to 205544
May 14 18:02:13.566414 kernel: loop5: detected capacity change from 0 to 146240
May 14 18:02:13.579695 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 18:02:13.580254 (sd-merge)[1273]: Merged extensions into '/usr'.
May 14 18:02:13.584692 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:02:13.584708 systemd[1]: Reloading...
May 14 18:02:13.652399 zram_generator::config[1302]: No configuration found.
May 14 18:02:13.739913 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:02:13.758072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:02:13.839216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:02:13.839434 systemd[1]: Reloading finished in 254 ms.
May 14 18:02:13.866572 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:02:13.868268 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:02:13.892179 systemd[1]: Starting ensure-sysext.service...
May 14 18:02:13.894338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:02:13.905484 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 14 18:02:13.905501 systemd[1]: Reloading...
May 14 18:02:13.915735 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:02:13.915777 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:02:13.916064 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:02:13.916322 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:02:13.917289 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:02:13.917579 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 14 18:02:13.917665 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 14 18:02:13.922576 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:02:13.922590 systemd-tmpfiles[1337]: Skipping /boot
May 14 18:02:13.936103 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:02:13.936120 systemd-tmpfiles[1337]: Skipping /boot
May 14 18:02:13.965409 zram_generator::config[1367]: No configuration found.
May 14 18:02:14.057220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:02:14.137605 systemd[1]: Reloading finished in 231 ms.
May 14 18:02:14.156847 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:02:14.182763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:02:14.191628 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:02:14.194160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:02:14.196681 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:02:14.207636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:02:14.212108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:02:14.215708 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:02:14.220701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:14.221329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:14.232489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:14.236741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:14.241260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:14.242733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:14.242878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:14.243004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:14.244954 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:02:14.248214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:14.248518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:14.250901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:14.251138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:14.253152 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:14.253435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:14.264676 augenrules[1432]: No rules
May 14 18:02:14.266060 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:02:14.266515 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:02:14.269480 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 14 18:02:14.271601 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:02:14.278508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:14.280079 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:02:14.281523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:14.282858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:14.297720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:02:14.300071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:14.302886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:14.304234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:14.304350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:14.306053 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:02:14.309196 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:02:14.310450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:14.312738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:02:14.315060 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:02:14.319038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:14.319250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:14.321210 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:02:14.321434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:02:14.323144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:14.323347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:14.325340 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:14.325574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:14.331972 systemd[1]: Finished ensure-sysext.service.
May 14 18:02:14.344616 augenrules[1442]: /sbin/augenrules: No change
May 14 18:02:14.349580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:02:14.351072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:02:14.351143 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:02:14.354538 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:02:14.355906 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:02:14.360728 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:02:14.366420 augenrules[1501]: No rules
May 14 18:02:14.371053 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:02:14.371563 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:02:14.416274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:02:14.421335 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:02:14.477638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:02:14.480611 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:02:14.492414 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 14 18:02:14.501068 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:02:14.508038 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:02:14.509405 kernel: ACPI: button: Power Button [PWRF]
May 14 18:02:14.534954 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 18:02:14.535232 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 18:02:14.535402 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:02:14.591460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:14.603761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:02:14.604026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:14.607759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:14.656860 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:02:14.658485 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:02:14.683928 systemd-resolved[1406]: Positive Trust Anchors:
May 14 18:02:14.683947 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:02:14.683986 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:02:14.687184 systemd-networkd[1492]: lo: Link UP
May 14 18:02:14.687199 systemd-networkd[1492]: lo: Gained carrier
May 14 18:02:14.690458 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 14 18:02:14.690758 kernel: kvm_amd: TSC scaling supported
May 14 18:02:14.690793 kernel: kvm_amd: Nested Virtualization enabled
May 14 18:02:14.690806 kernel: kvm_amd: Nested Paging enabled
May 14 18:02:14.690818 kernel: kvm_amd: LBR virtualization supported
May 14 18:02:14.691998 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 18:02:14.692016 kernel: kvm_amd: Virtual GIF supported
May 14 18:02:14.692303 systemd-networkd[1492]: Enumeration completed
May 14 18:02:14.692404 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:02:14.692702 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:14.692711 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:02:14.693957 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:02:14.694425 systemd-networkd[1492]: eth0: Link UP
May 14 18:02:14.694588 systemd-networkd[1492]: eth0: Gained carrier
May 14 18:02:14.694606 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:14.695326 systemd[1]: Reached target network.target - Network.
May 14 18:02:14.696635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:02:14.700803 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:02:14.705692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:02:14.719427 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:02:14.719980 systemd-timesyncd[1499]: Network configuration changed, trying to establish connection.
May 14 18:02:15.886461 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 14 18:02:15.886548 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 18:02:15.886632 systemd-timesyncd[1499]: Initial clock synchronization to Wed 2025-05-14 18:02:15.886426 UTC.
May 14 18:02:15.902350 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:02:15.908205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:15.910443 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:02:15.920728 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:02:15.922339 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:02:15.923629 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:02:15.925034 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:02:15.926243 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:02:15.927555 kernel: EDAC MC: Ver: 3.0.0
May 14 18:02:15.928140 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:02:15.929499 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:02:15.929548 systemd[1]: Reached target paths.target - Path Units.
May 14 18:02:15.930613 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:02:15.932261 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:02:15.935028 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:02:15.938776 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:02:15.940196 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:02:15.941675 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:02:15.944903 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:02:15.946550 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:02:15.948292 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:02:15.950038 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:02:15.951026 systemd[1]: Reached target basic.target - Basic System.
May 14 18:02:15.952025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:02:15.952053 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:02:15.953069 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:02:15.955084 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:02:15.957779 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:02:15.966444 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:02:15.968979 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:02:15.970009 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:02:15.971193 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:02:15.973278 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:02:15.974261 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:02:15.976749 jq[1560]: false
May 14 18:02:15.977178 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:02:15.981873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:02:15.983617 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache
May 14 18:02:15.983921 oslogin_cache_refresh[1562]: Refreshing passwd entry cache
May 14 18:02:15.986319 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:02:15.988224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:02:15.988767 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:02:15.989706 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:02:15.996235 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:02:15.999548 extend-filesystems[1561]: Found loop3
May 14 18:02:15.999548 extend-filesystems[1561]: Found loop4
May 14 18:02:15.999548 extend-filesystems[1561]: Found loop5
May 14 18:02:15.999548 extend-filesystems[1561]: Found sr0
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda1
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda2
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda3
May 14 18:02:15.999548 extend-filesystems[1561]: Found usr
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda4
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda6
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda7
May 14 18:02:15.999548 extend-filesystems[1561]: Found vda9
May 14 18:02:15.999548 extend-filesystems[1561]: Checking size of /dev/vda9
May 14 18:02:16.036396 update_engine[1571]: I20250514 18:02:16.030560 1571 main.cc:92] Flatcar Update Engine starting
May 14 18:02:16.035092 oslogin_cache_refresh[1562]: Failure getting users, quitting
May 14 18:02:16.036809 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting
May 14 18:02:16.036809 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:02:16.028330 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:02:16.035114 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:02:16.037047 jq[1575]: true
May 14 18:02:16.030230 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:02:16.030553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:02:16.030959 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:02:16.031216 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:02:16.033964 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:02:16.034248 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:02:16.045175 extend-filesystems[1561]: Resized partition /dev/vda9
May 14 18:02:16.060018 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:02:16.073551 jq[1583]: true
May 14 18:02:16.073688 extend-filesystems[1596]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:02:16.084637 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache
May 14 18:02:16.084632 oslogin_cache_refresh[1562]: Refreshing group entry cache
May 14 18:02:16.094588 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting
May 14 18:02:16.094588 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:02:16.094223 oslogin_cache_refresh[1562]: Failure getting groups, quitting
May 14 18:02:16.094240 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:02:16.100419 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:02:16.100824 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:02:16.128831 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 18:02:16.133103 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 18:02:16.133411 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 18:02:16.133593 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 18:02:16.134144 systemd-logind[1568]: New seat seat0.
May 14 18:02:16.138037 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 18:02:16.139892 tar[1582]: linux-amd64/helm
May 14 18:02:16.161095 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 18:02:16.176020 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 18:02:16.202823 systemd[1]: issuegen.service: Deactivated successfully.
May 14 18:02:16.203085 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 18:02:16.206302 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 18:02:16.231793 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 18:02:16.237611 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 18:02:16.249302 update_engine[1571]: I20250514 18:02:16.239996 1571 update_check_scheduler.cc:74] Next update check in 2m58s
May 14 18:02:16.236424 dbus-daemon[1558]: [system] SELinux support is enabled
May 14 18:02:16.240291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 18:02:16.242687 systemd[1]: Reached target getty.target - Login Prompts.
May 14 18:02:16.244066 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:02:16.248368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:02:16.248396 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:02:16.249775 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 14 18:02:16.251625 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:02:16.251669 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 18:02:16.253497 systemd[1]: Started update-engine.service - Update Engine.
May 14 18:02:16.257399 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 18:02:16.261552 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 18:02:16.284993 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 18:02:16.284993 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 18:02:16.284993 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 18:02:16.290395 extend-filesystems[1561]: Resized filesystem in /dev/vda9
May 14 18:02:16.286841 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 18:02:16.287159 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 18:02:16.293135 bash[1628]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:02:16.297326 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 18:02:16.300319 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 18:02:16.309622 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 18:02:16.406476 containerd[1592]: time="2025-05-14T18:02:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 18:02:16.409434 containerd[1592]: time="2025-05-14T18:02:16.409395579Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 18:02:16.419397 containerd[1592]: time="2025-05-14T18:02:16.419326403Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.827µs"
May 14 18:02:16.419397 containerd[1592]: time="2025-05-14T18:02:16.419372139Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 18:02:16.419397 containerd[1592]: time="2025-05-14T18:02:16.419395072Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 18:02:16.419674 containerd[1592]: time="2025-05-14T18:02:16.419643518Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 18:02:16.419674 containerd[1592]: time="2025-05-14T18:02:16.419666742Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 18:02:16.419758 containerd[1592]: time="2025-05-14T18:02:16.419703170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:02:16.419820 containerd[1592]: time="2025-05-14T18:02:16.419794782Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:02:16.419820 containerd[1592]: time="2025-05-14T18:02:16.419814199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:02:16.420187 containerd[1592]: time="2025-05-14T18:02:16.420153916Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:02:16.420187 containerd[1592]: time="2025-05-14T18:02:16.420174625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:02:16.420242 containerd[1592]: time="2025-05-14T18:02:16.420187179Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:02:16.420242 containerd[1592]: time="2025-05-14T18:02:16.420197728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 18:02:16.420332 containerd[1592]: time="2025-05-14T18:02:16.420303898Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 18:02:16.420655 containerd[1592]: time="2025-05-14T18:02:16.420626563Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:02:16.420701 containerd[1592]: time="2025-05-14T18:02:16.420668832Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:02:16.420701 containerd[1592]: time="2025-05-14T18:02:16.420680574Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 18:02:16.420980 containerd[1592]: time="2025-05-14T18:02:16.420949629Z" level=info msg="loading plugin"
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:02:16.421239 containerd[1592]: time="2025-05-14T18:02:16.421210759Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:02:16.421315 containerd[1592]: time="2025-05-14T18:02:16.421293975Z" level=info msg="metadata content store policy set" policy=shared May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427123444Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427175281Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427191031Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427204636Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427218232Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427231396Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427246745Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427261052Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427273375Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427285799Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427297450Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:02:16.427291 containerd[1592]: time="2025-05-14T18:02:16.427313400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427447943Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427470315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427487287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427511161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427542100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427555234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427568399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427586022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 
18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427599407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427612932Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427625827Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427691800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427707059Z" level=info msg="Start snapshots syncer" May 14 18:02:16.427765 containerd[1592]: time="2025-05-14T18:02:16.427738558Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:02:16.430038 containerd[1592]: time="2025-05-14T18:02:16.429979673Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:02:16.430038 containerd[1592]: time="2025-05-14T18:02:16.430047120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:02:16.431051 containerd[1592]: time="2025-05-14T18:02:16.431018383Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:02:16.431184 containerd[1592]: time="2025-05-14T18:02:16.431156261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:02:16.431223 containerd[1592]: time="2025-05-14T18:02:16.431194253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:02:16.431223 containerd[1592]: time="2025-05-14T18:02:16.431208930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:02:16.431261 containerd[1592]: time="2025-05-14T18:02:16.431220943Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:02:16.431261 containerd[1592]: time="2025-05-14T18:02:16.431235981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:02:16.431261 containerd[1592]: time="2025-05-14T18:02:16.431248885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:02:16.431313 containerd[1592]: time="2025-05-14T18:02:16.431261739Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:02:16.431313 containerd[1592]: time="2025-05-14T18:02:16.431286896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:02:16.431313 containerd[1592]: time="2025-05-14T18:02:16.431301113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:02:16.431387 containerd[1592]: time="2025-05-14T18:02:16.431316843Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:02:16.432131 containerd[1592]: time="2025-05-14T18:02:16.432103018Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:02:16.432172 containerd[1592]: time="2025-05-14T18:02:16.432129658Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:02:16.432172 containerd[1592]: time="2025-05-14T18:02:16.432142301Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:02:16.432172 containerd[1592]: time="2025-05-14T18:02:16.432154174Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:02:16.432172 containerd[1592]: time="2025-05-14T18:02:16.432165535Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432178489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432192215Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432214096Z" level=info msg="runtime interface created" May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432221169Z" level=info msg="created NRI interface" May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432236839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:02:16.432254 containerd[1592]: time="2025-05-14T18:02:16.432250945Z" level=info msg="Connect containerd service" May 14 18:02:16.432371 containerd[1592]: time="2025-05-14T18:02:16.432278808Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:02:16.433238 
containerd[1592]: time="2025-05-14T18:02:16.433206338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:02:16.533930 containerd[1592]: time="2025-05-14T18:02:16.533827897Z" level=info msg="Start subscribing containerd event" May 14 18:02:16.533930 containerd[1592]: time="2025-05-14T18:02:16.533900744Z" level=info msg="Start recovering state" May 14 18:02:16.533930 containerd[1592]: time="2025-05-14T18:02:16.533936321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.533992446Z" level=info msg="Start event monitor" May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534005571Z" level=info msg="Start cni network conf syncer for default" May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.533993528Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534045566Z" level=info msg="Start streaming server" May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534055925Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534064902Z" level=info msg="runtime interface starting up..." May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534073628Z" level=info msg="starting plugins..." May 14 18:02:16.534092 containerd[1592]: time="2025-05-14T18:02:16.534092223Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:02:16.534348 systemd[1]: Started containerd.service - containerd container runtime. 
May 14 18:02:16.536223 containerd[1592]: time="2025-05-14T18:02:16.536148832Z" level=info msg="containerd successfully booted in 0.130477s" May 14 18:02:16.562963 tar[1582]: linux-amd64/LICENSE May 14 18:02:16.562963 tar[1582]: linux-amd64/README.md May 14 18:02:16.585806 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:02:17.633880 systemd-networkd[1492]: eth0: Gained IPv6LL May 14 18:02:17.637602 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:02:17.639835 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:02:17.643194 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:02:17.646043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:02:17.648681 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:02:17.680878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:02:17.683339 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:02:17.683741 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 18:02:17.686978 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:02:17.820005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:02:17.822772 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:55760.service - OpenSSH per-connection server daemon (10.0.0.1:55760). May 14 18:02:17.890068 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 55760 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:17.892743 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:17.900566 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 14 18:02:17.903300 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:02:17.913328 systemd-logind[1568]: New session 1 of user core. May 14 18:02:17.927258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:02:17.932884 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:02:17.950117 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:02:17.953826 systemd-logind[1568]: New session c1 of user core. May 14 18:02:18.114217 systemd[1690]: Queued start job for default target default.target. May 14 18:02:18.125799 systemd[1690]: Created slice app.slice - User Application Slice. May 14 18:02:18.125825 systemd[1690]: Reached target paths.target - Paths. May 14 18:02:18.125866 systemd[1690]: Reached target timers.target - Timers. May 14 18:02:18.127313 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:02:18.140456 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:02:18.140642 systemd[1690]: Reached target sockets.target - Sockets. May 14 18:02:18.140704 systemd[1690]: Reached target basic.target - Basic System. May 14 18:02:18.140752 systemd[1690]: Reached target default.target - Main User Target. May 14 18:02:18.140792 systemd[1690]: Startup finished in 179ms. May 14 18:02:18.141202 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:02:18.144326 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:02:18.205483 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:55770.service - OpenSSH per-connection server daemon (10.0.0.1:55770). 
May 14 18:02:18.254490 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 55770 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:18.256238 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:18.261360 systemd-logind[1568]: New session 2 of user core. May 14 18:02:18.271853 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:02:18.326271 sshd[1703]: Connection closed by 10.0.0.1 port 55770 May 14 18:02:18.326583 sshd-session[1701]: pam_unix(sshd:session): session closed for user core May 14 18:02:18.332401 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:55770.service: Deactivated successfully. May 14 18:02:18.334904 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:02:18.336555 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. May 14 18:02:18.339599 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:55784.service - OpenSSH per-connection server daemon (10.0.0.1:55784). May 14 18:02:18.341692 systemd-logind[1568]: Removed session 2. May 14 18:02:18.350178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:18.352038 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:02:18.353652 systemd[1]: Startup finished in 2.902s (kernel) + 9.799s (initrd) + 4.788s (userspace) = 17.490s. May 14 18:02:18.354469 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:02:18.383044 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 55784 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:18.384746 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:18.390018 systemd-logind[1568]: New session 3 of user core. May 14 18:02:18.399739 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 14 18:02:18.454845 sshd[1721]: Connection closed by 10.0.0.1 port 55784 May 14 18:02:18.455282 sshd-session[1713]: pam_unix(sshd:session): session closed for user core May 14 18:02:18.459153 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:55784.service: Deactivated successfully. May 14 18:02:18.461053 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:02:18.461954 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. May 14 18:02:18.463426 systemd-logind[1568]: Removed session 3. May 14 18:02:18.770079 kubelet[1715]: E0514 18:02:18.769890 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:02:18.773834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:02:18.774036 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:02:18.774465 systemd[1]: kubelet.service: Consumed 942ms CPU time, 235.7M memory peak. May 14 18:02:28.480874 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:46066.service - OpenSSH per-connection server daemon (10.0.0.1:46066). May 14 18:02:28.529621 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 46066 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:28.531207 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:28.535399 systemd-logind[1568]: New session 4 of user core. May 14 18:02:28.544658 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 14 18:02:28.598081 sshd[1737]: Connection closed by 10.0.0.1 port 46066 May 14 18:02:28.598504 sshd-session[1735]: pam_unix(sshd:session): session closed for user core May 14 18:02:28.611334 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:46066.service: Deactivated successfully. May 14 18:02:28.613003 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:02:28.613791 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. May 14 18:02:28.616605 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:46078.service - OpenSSH per-connection server daemon (10.0.0.1:46078). May 14 18:02:28.617227 systemd-logind[1568]: Removed session 4. May 14 18:02:28.667598 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 46078 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:28.668986 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:28.673228 systemd-logind[1568]: New session 5 of user core. May 14 18:02:28.683654 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 18:02:28.733010 sshd[1745]: Connection closed by 10.0.0.1 port 46078 May 14 18:02:28.733264 sshd-session[1743]: pam_unix(sshd:session): session closed for user core May 14 18:02:28.748146 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:46078.service: Deactivated successfully. May 14 18:02:28.750103 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:02:28.750843 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. May 14 18:02:28.753709 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). May 14 18:02:28.754389 systemd-logind[1568]: Removed session 5. May 14 18:02:28.794248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:02:28.796157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 18:02:28.802395 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:28.804138 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:28.808921 systemd-logind[1568]: New session 6 of user core. May 14 18:02:28.811500 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 18:02:28.866432 sshd[1756]: Connection closed by 10.0.0.1 port 46086 May 14 18:02:28.866976 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 14 18:02:28.880364 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:46086.service: Deactivated successfully. May 14 18:02:28.882217 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:02:28.882944 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. May 14 18:02:28.886379 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). May 14 18:02:28.887121 systemd-logind[1568]: Removed session 6. May 14 18:02:28.931366 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:28.932893 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:28.938142 systemd-logind[1568]: New session 7 of user core. May 14 18:02:28.953644 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:02:28.964388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:02:28.969090 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:02:29.004435 kubelet[1770]: E0514 18:02:29.004268 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:02:29.011497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:02:29.011711 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:02:29.011940 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:02:29.012257 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:29.012080 systemd[1]: kubelet.service: Consumed 194ms CPU time, 96M memory peak. May 14 18:02:29.026810 sudo[1777]: pam_unix(sudo:session): session closed for user root May 14 18:02:29.028621 sshd[1766]: Connection closed by 10.0.0.1 port 46096 May 14 18:02:29.029023 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 14 18:02:29.044129 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:46096.service: Deactivated successfully. May 14 18:02:29.046016 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:02:29.046736 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. May 14 18:02:29.049813 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:46110.service - OpenSSH per-connection server daemon (10.0.0.1:46110). May 14 18:02:29.050416 systemd-logind[1568]: Removed session 7. 
May 14 18:02:29.109007 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 46110 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:29.110397 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:29.114704 systemd-logind[1568]: New session 8 of user core. May 14 18:02:29.124648 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:02:29.179222 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:02:29.179566 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:29.371651 sudo[1788]: pam_unix(sudo:session): session closed for user root May 14 18:02:29.378398 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:02:29.378836 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:29.389945 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:02:29.440715 augenrules[1810]: No rules May 14 18:02:29.442556 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:02:29.442824 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:02:29.444194 sudo[1787]: pam_unix(sudo:session): session closed for user root May 14 18:02:29.445734 sshd[1786]: Connection closed by 10.0.0.1 port 46110 May 14 18:02:29.445987 sshd-session[1784]: pam_unix(sshd:session): session closed for user core May 14 18:02:29.459137 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:46110.service: Deactivated successfully. May 14 18:02:29.460812 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:02:29.461537 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. 
May 14 18:02:29.464339 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:46114.service - OpenSSH per-connection server daemon (10.0.0.1:46114). May 14 18:02:29.464878 systemd-logind[1568]: Removed session 8. May 14 18:02:29.508172 sshd[1819]: Accepted publickey for core from 10.0.0.1 port 46114 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:29.509566 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:29.514306 systemd-logind[1568]: New session 9 of user core. May 14 18:02:29.523659 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:02:29.576619 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:02:29.576911 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:29.891496 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:02:29.906839 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:02:30.129674 dockerd[1843]: time="2025-05-14T18:02:30.129605509Z" level=info msg="Starting up" May 14 18:02:30.131128 dockerd[1843]: time="2025-05-14T18:02:30.131100334Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:02:30.198957 dockerd[1843]: time="2025-05-14T18:02:30.198846785Z" level=info msg="Loading containers: start." May 14 18:02:30.209557 kernel: Initializing XFRM netlink socket May 14 18:02:30.462695 systemd-networkd[1492]: docker0: Link UP May 14 18:02:30.468385 dockerd[1843]: time="2025-05-14T18:02:30.468338878Z" level=info msg="Loading containers: done." May 14 18:02:30.481539 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4157242121-merged.mount: Deactivated successfully. 
May 14 18:02:30.482704 dockerd[1843]: time="2025-05-14T18:02:30.482657846Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:02:30.482788 dockerd[1843]: time="2025-05-14T18:02:30.482765739Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:02:30.482927 dockerd[1843]: time="2025-05-14T18:02:30.482901012Z" level=info msg="Initializing buildkit" May 14 18:02:30.513577 dockerd[1843]: time="2025-05-14T18:02:30.513516190Z" level=info msg="Completed buildkit initialization" May 14 18:02:30.517618 dockerd[1843]: time="2025-05-14T18:02:30.517569095Z" level=info msg="Daemon has completed initialization" May 14 18:02:30.517738 dockerd[1843]: time="2025-05-14T18:02:30.517683770Z" level=info msg="API listen on /run/docker.sock" May 14 18:02:30.517770 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:02:31.381443 containerd[1592]: time="2025-05-14T18:02:31.381398659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 18:02:32.122876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338557383.mount: Deactivated successfully. 
May 14 18:02:33.232973 containerd[1592]: time="2025-05-14T18:02:33.232902773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:33.233787 containerd[1592]: time="2025-05-14T18:02:33.233730206Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 18:02:33.234873 containerd[1592]: time="2025-05-14T18:02:33.234833456Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:33.237244 containerd[1592]: time="2025-05-14T18:02:33.237207471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:33.238064 containerd[1592]: time="2025-05-14T18:02:33.237995209Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.856556344s" May 14 18:02:33.238064 containerd[1592]: time="2025-05-14T18:02:33.238040634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 18:02:33.239656 containerd[1592]: time="2025-05-14T18:02:33.239628554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 18:02:34.707875 containerd[1592]: time="2025-05-14T18:02:34.707804330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:34.708631 containerd[1592]: time="2025-05-14T18:02:34.708576129Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 18:02:34.709687 containerd[1592]: time="2025-05-14T18:02:34.709633773Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:34.712159 containerd[1592]: time="2025-05-14T18:02:34.712110881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:34.713039 containerd[1592]: time="2025-05-14T18:02:34.713008916Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.473353201s" May 14 18:02:34.713039 containerd[1592]: time="2025-05-14T18:02:34.713038522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 18:02:34.713586 containerd[1592]: time="2025-05-14T18:02:34.713562896Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 18:02:36.266228 containerd[1592]: time="2025-05-14T18:02:36.266146588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:36.269327 containerd[1592]: time="2025-05-14T18:02:36.269238470Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 18:02:36.271639 containerd[1592]: time="2025-05-14T18:02:36.271598468Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:36.285197 containerd[1592]: time="2025-05-14T18:02:36.285135910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:36.286011 containerd[1592]: time="2025-05-14T18:02:36.285972901Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.572379878s" May 14 18:02:36.286011 containerd[1592]: time="2025-05-14T18:02:36.286009579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 18:02:36.286507 containerd[1592]: time="2025-05-14T18:02:36.286478058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 18:02:37.229745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928347156.mount: Deactivated successfully. 
May 14 18:02:37.843266 containerd[1592]: time="2025-05-14T18:02:37.843188534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:37.843967 containerd[1592]: time="2025-05-14T18:02:37.843936217Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 18:02:37.845809 containerd[1592]: time="2025-05-14T18:02:37.845770329Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:37.848132 containerd[1592]: time="2025-05-14T18:02:37.848076686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:37.848750 containerd[1592]: time="2025-05-14T18:02:37.848687122Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.562181922s" May 14 18:02:37.850555 containerd[1592]: time="2025-05-14T18:02:37.848745812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 18:02:37.851164 containerd[1592]: time="2025-05-14T18:02:37.851106862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:02:38.652721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374118929.mount: Deactivated successfully. 
May 14 18:02:39.262441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 18:02:39.264790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:02:39.458685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:39.468841 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:02:39.749688 containerd[1592]: time="2025-05-14T18:02:39.749546658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:39.750653 containerd[1592]: time="2025-05-14T18:02:39.750631404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 18:02:39.751584 containerd[1592]: time="2025-05-14T18:02:39.751550358Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:39.754422 containerd[1592]: time="2025-05-14T18:02:39.754339782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:39.755510 containerd[1592]: time="2025-05-14T18:02:39.755474331Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.904331151s" May 14 18:02:39.755510 containerd[1592]: time="2025-05-14T18:02:39.755504227Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:02:39.756233 containerd[1592]: time="2025-05-14T18:02:39.756210593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:02:39.779738 kubelet[2175]: E0514 18:02:39.779669 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:02:39.784839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:02:39.785060 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:02:39.785475 systemd[1]: kubelet.service: Consumed 304ms CPU time, 96.1M memory peak. May 14 18:02:40.519599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157119580.mount: Deactivated successfully. 
May 14 18:02:40.525670 containerd[1592]: time="2025-05-14T18:02:40.525623541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:02:40.527491 containerd[1592]: time="2025-05-14T18:02:40.527457983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:02:40.534774 containerd[1592]: time="2025-05-14T18:02:40.534737913Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:02:40.537143 containerd[1592]: time="2025-05-14T18:02:40.537071262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:02:40.537832 containerd[1592]: time="2025-05-14T18:02:40.537791233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 781.552858ms" May 14 18:02:40.537873 containerd[1592]: time="2025-05-14T18:02:40.537827581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 18:02:40.538488 containerd[1592]: time="2025-05-14T18:02:40.538273738Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 18:02:41.421831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956469959.mount: 
Deactivated successfully. May 14 18:02:43.656958 containerd[1592]: time="2025-05-14T18:02:43.656880600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:43.657900 containerd[1592]: time="2025-05-14T18:02:43.657836684Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 18:02:43.659573 containerd[1592]: time="2025-05-14T18:02:43.659511226Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:43.662912 containerd[1592]: time="2025-05-14T18:02:43.662864909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:02:43.664132 containerd[1592]: time="2025-05-14T18:02:43.664084137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.125783348s" May 14 18:02:43.664171 containerd[1592]: time="2025-05-14T18:02:43.664136125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 18:02:46.170251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:46.170477 systemd[1]: kubelet.service: Consumed 304ms CPU time, 96.1M memory peak. May 14 18:02:46.173471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 18:02:46.201163 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-9.scope)... May 14 18:02:46.201181 systemd[1]: Reloading... May 14 18:02:46.300576 zram_generator::config[2313]: No configuration found. May 14 18:02:46.441948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:02:46.588559 systemd[1]: Reloading finished in 386 ms. May 14 18:02:46.659481 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:02:46.659633 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:02:46.659939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:46.659991 systemd[1]: kubelet.service: Consumed 152ms CPU time, 83.6M memory peak. May 14 18:02:46.661891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:02:46.822951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:46.834948 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:02:46.916041 kubelet[2361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:02:46.916041 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:02:46.916041 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:02:46.916603 kubelet[2361]: I0514 18:02:46.916083 2361 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:02:47.407533 kubelet[2361]: I0514 18:02:47.407457 2361 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:02:47.407533 kubelet[2361]: I0514 18:02:47.407495 2361 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:02:47.407824 kubelet[2361]: I0514 18:02:47.407798 2361 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:02:47.551739 kubelet[2361]: I0514 18:02:47.551658 2361 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:02:47.551950 kubelet[2361]: E0514 18:02:47.551905 2361 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 14 18:02:47.562808 kubelet[2361]: I0514 18:02:47.562768 2361 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:02:47.569792 kubelet[2361]: I0514 18:02:47.569732 2361 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:02:47.574515 kubelet[2361]: I0514 18:02:47.574458 2361 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:02:47.574778 kubelet[2361]: I0514 18:02:47.574716 2361 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:02:47.575035 kubelet[2361]: I0514 18:02:47.574762 2361 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 18:02:47.575159 kubelet[2361]: I0514 18:02:47.575050 2361 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:02:47.575159 kubelet[2361]: I0514 18:02:47.575064 2361 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:02:47.575252 kubelet[2361]: I0514 18:02:47.575231 2361 state_mem.go:36] "Initialized new in-memory state store" May 14 18:02:47.579614 kubelet[2361]: I0514 18:02:47.579549 2361 kubelet.go:408] "Attempting to sync node with API server" May 14 18:02:47.579614 kubelet[2361]: I0514 18:02:47.579602 2361 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:02:47.579743 kubelet[2361]: I0514 18:02:47.579650 2361 kubelet.go:314] "Adding apiserver pod source" May 14 18:02:47.579743 kubelet[2361]: I0514 18:02:47.579671 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:02:47.580545 kubelet[2361]: W0514 18:02:47.580456 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 14 18:02:47.580597 kubelet[2361]: E0514 18:02:47.580559 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 14 18:02:47.584400 kubelet[2361]: W0514 18:02:47.584338 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 14 18:02:47.584457 
kubelet[2361]: E0514 18:02:47.584406 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 14 18:02:47.591331 kubelet[2361]: I0514 18:02:47.591297 2361 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:02:47.600606 kubelet[2361]: I0514 18:02:47.600567 2361 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:02:47.601402 kubelet[2361]: W0514 18:02:47.601369 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 18:02:47.602173 kubelet[2361]: I0514 18:02:47.602153 2361 server.go:1269] "Started kubelet" May 14 18:02:47.603583 kubelet[2361]: I0514 18:02:47.603517 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:02:47.608352 kubelet[2361]: I0514 18:02:47.608240 2361 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:02:47.608352 kubelet[2361]: I0514 18:02:47.608336 2361 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:02:47.608579 kubelet[2361]: I0514 18:02:47.608404 2361 reconciler.go:26] "Reconciler: start to sync state" May 14 18:02:47.608883 kubelet[2361]: W0514 18:02:47.608705 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 14 18:02:47.608883 kubelet[2361]: E0514 18:02:47.608753 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 14 18:02:47.609946 kubelet[2361]: I0514 18:02:47.609907 2361 factory.go:221] Registration of the systemd container factory successfully May 14 18:02:47.610090 kubelet[2361]: I0514 18:02:47.610057 2361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:02:47.610452 kubelet[2361]: I0514 18:02:47.610384 2361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:02:47.610666 kubelet[2361]: I0514 18:02:47.610606 2361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:02:47.611349 kubelet[2361]: I0514 18:02:47.611320 2361 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:02:47.611789 kubelet[2361]: I0514 18:02:47.611708 2361 server.go:460] "Adding debug handlers to kubelet server" May 14 18:02:47.622408 kubelet[2361]: I0514 18:02:47.622381 2361 factory.go:221] Registration of the containerd container factory successfully May 14 18:02:47.623239 kubelet[2361]: I0514 18:02:47.623211 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:02:47.623682 kubelet[2361]: E0514 18:02:47.623658 2361 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:02:47.631913 kubelet[2361]: E0514 18:02:47.631479 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:02:47.638479 kubelet[2361]: E0514 18:02:47.638423 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms" May 14 18:02:47.641905 kubelet[2361]: E0514 18:02:47.636930 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f76c4b13abab1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:02:47.602125489 +0000 UTC m=+0.740758902,LastTimestamp:2025-05-14 18:02:47.602125489 +0000 UTC m=+0.740758902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:02:47.650720 kubelet[2361]: I0514 18:02:47.650689 2361 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:02:47.650720 kubelet[2361]: I0514 18:02:47.650709 2361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:02:47.650720 kubelet[2361]: I0514 18:02:47.650726 2361 state_mem.go:36] "Initialized new in-memory state store" May 14 18:02:47.655233 kubelet[2361]: I0514 18:02:47.655051 2361 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 14 18:02:47.656556 kubelet[2361]: I0514 18:02:47.656492 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:02:47.656617 kubelet[2361]: I0514 18:02:47.656575 2361 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:02:47.656876 kubelet[2361]: I0514 18:02:47.656755 2361 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:02:47.656876 kubelet[2361]: E0514 18:02:47.656812 2361 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:02:47.665358 kubelet[2361]: W0514 18:02:47.665187 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 14 18:02:47.665358 kubelet[2361]: E0514 18:02:47.665276 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 14 18:02:47.666402 kubelet[2361]: I0514 18:02:47.666365 2361 policy_none.go:49] "None policy: Start" May 14 18:02:47.667388 kubelet[2361]: I0514 18:02:47.667361 2361 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:02:47.667388 kubelet[2361]: I0514 18:02:47.667390 2361 state_mem.go:35] "Initializing new in-memory state store" May 14 18:02:47.682376 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:02:47.696796 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:02:47.714471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:02:47.716093 kubelet[2361]: I0514 18:02:47.716047 2361 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:02:47.716357 kubelet[2361]: I0514 18:02:47.716340 2361 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:02:47.717168 kubelet[2361]: I0514 18:02:47.716363 2361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:02:47.717168 kubelet[2361]: I0514 18:02:47.716670 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:02:47.717822 kubelet[2361]: E0514 18:02:47.717802 2361 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 18:02:47.767480 systemd[1]: Created slice kubepods-burstable-podf8d4b84ffc65ebafc0ab849b1d0928f4.slice - libcontainer container kubepods-burstable-podf8d4b84ffc65ebafc0ab849b1d0928f4.slice. May 14 18:02:47.805506 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 14 18:02:47.809904 kubelet[2361]: I0514 18:02:47.809737 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:47.809904 kubelet[2361]: I0514 18:02:47.809904 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:47.810073 kubelet[2361]: I0514 18:02:47.809937 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:47.810073 kubelet[2361]: I0514 18:02:47.809962 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:47.810073 kubelet[2361]: I0514 18:02:47.809986 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:47.810073 kubelet[2361]: I0514 18:02:47.810008 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:47.810073 kubelet[2361]: I0514 18:02:47.810042 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 14 18:02:47.810218 kubelet[2361]: I0514 18:02:47.810064 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:47.810218 kubelet[2361]: I0514 18:02:47.810085 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:47.817673 kubelet[2361]: I0514 18:02:47.817639 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:02:47.818274 kubelet[2361]: E0514 18:02:47.818235 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
May 14 18:02:47.820107 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 14 18:02:47.839730 kubelet[2361]: E0514 18:02:47.839670 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms"
May 14 18:02:48.020313 kubelet[2361]: I0514 18:02:48.020195 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:02:48.020726 kubelet[2361]: E0514 18:02:48.020669 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
May 14 18:02:48.103841 kubelet[2361]: E0514 18:02:48.103789 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.104676 containerd[1592]: time="2025-05-14T18:02:48.104626055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8d4b84ffc65ebafc0ab849b1d0928f4,Namespace:kube-system,Attempt:0,}"
May 14 18:02:48.117488 kubelet[2361]: E0514 18:02:48.117436 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.118172 containerd[1592]: time="2025-05-14T18:02:48.118109977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 14 18:02:48.123003 kubelet[2361]: E0514 18:02:48.122961 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.123581 containerd[1592]: time="2025-05-14T18:02:48.123517744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 14 18:02:48.144308 containerd[1592]: time="2025-05-14T18:02:48.144240740Z" level=info msg="connecting to shim 6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863" address="unix:///run/containerd/s/bf9b08202793d7497f0c787c6b8041c92b17b5cfc1ab60568b298ab4aaeb186c" namespace=k8s.io protocol=ttrpc version=3
May 14 18:02:48.280638 kubelet[2361]: E0514 18:02:48.280464 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms"
May 14 18:02:48.294038 containerd[1592]: time="2025-05-14T18:02:48.293964636Z" level=info msg="connecting to shim c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca" address="unix:///run/containerd/s/d19a9896fb08fa24009f4c7712b1363e5bcfd8a045e3194f0d6ed079e321ae40" namespace=k8s.io protocol=ttrpc version=3
May 14 18:02:48.298842 systemd[1]: Started cri-containerd-6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863.scope - libcontainer container 6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863.
May 14 18:02:48.300476 containerd[1592]: time="2025-05-14T18:02:48.300313399Z" level=info msg="connecting to shim c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb" address="unix:///run/containerd/s/7c5a5bea7b5cf1b2f0a8ba583f5779ebd8463a876c776e8983f7228afddcd337" namespace=k8s.io protocol=ttrpc version=3
May 14 18:02:48.346781 systemd[1]: Started cri-containerd-c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca.scope - libcontainer container c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca.
May 14 18:02:48.362433 systemd[1]: Started cri-containerd-c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb.scope - libcontainer container c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb.
May 14 18:02:48.389874 containerd[1592]: time="2025-05-14T18:02:48.389817958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8d4b84ffc65ebafc0ab849b1d0928f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863\""
May 14 18:02:48.391944 kubelet[2361]: E0514 18:02:48.391909 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.395386 containerd[1592]: time="2025-05-14T18:02:48.395343716Z" level=info msg="CreateContainer within sandbox \"6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 18:02:48.422358 kubelet[2361]: I0514 18:02:48.422322 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:02:48.422840 kubelet[2361]: E0514 18:02:48.422805 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
May 14 18:02:48.427861 containerd[1592]: time="2025-05-14T18:02:48.427811189Z" level=info msg="Container bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:48.459798 containerd[1592]: time="2025-05-14T18:02:48.459728129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb\""
May 14 18:02:48.460928 kubelet[2361]: E0514 18:02:48.460856 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.463203 containerd[1592]: time="2025-05-14T18:02:48.463175708Z" level=info msg="CreateContainer within sandbox \"c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 18:02:48.474884 containerd[1592]: time="2025-05-14T18:02:48.474841729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca\""
May 14 18:02:48.475555 kubelet[2361]: E0514 18:02:48.475510 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:48.477236 containerd[1592]: time="2025-05-14T18:02:48.477188121Z" level=info msg="CreateContainer within sandbox \"c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 18:02:48.529710 kubelet[2361]: W0514 18:02:48.529639 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 14 18:02:48.529710 kubelet[2361]: E0514 18:02:48.529710 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
May 14 18:02:48.587351 kubelet[2361]: W0514 18:02:48.587144 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 14 18:02:48.587351 kubelet[2361]: E0514 18:02:48.587222 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
May 14 18:02:48.618541 containerd[1592]: time="2025-05-14T18:02:48.618463044Z" level=info msg="CreateContainer within sandbox \"6956349e52462e6714452c2e02540986a146ed92816dd6a6a22cb04037720863\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444\""
May 14 18:02:48.619301 containerd[1592]: time="2025-05-14T18:02:48.619275739Z" level=info msg="StartContainer for \"bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444\""
May 14 18:02:48.620446 containerd[1592]: time="2025-05-14T18:02:48.620401721Z" level=info msg="connecting to shim bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444" address="unix:///run/containerd/s/bf9b08202793d7497f0c787c6b8041c92b17b5cfc1ab60568b298ab4aaeb186c" protocol=ttrpc version=3
May 14 18:02:48.625260 containerd[1592]: time="2025-05-14T18:02:48.625215304Z" level=info msg="Container b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:48.633845 containerd[1592]: time="2025-05-14T18:02:48.633782770Z" level=info msg="CreateContainer within sandbox \"c6ee29fbc6eb765d1aea9a36e7ab8d965763ff987311aaf7ca1fced8343d07ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5\""
May 14 18:02:48.634553 containerd[1592]: time="2025-05-14T18:02:48.634466283Z" level=info msg="StartContainer for \"b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5\""
May 14 18:02:48.635607 containerd[1592]: time="2025-05-14T18:02:48.635577388Z" level=info msg="connecting to shim b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5" address="unix:///run/containerd/s/d19a9896fb08fa24009f4c7712b1363e5bcfd8a045e3194f0d6ed079e321ae40" protocol=ttrpc version=3
May 14 18:02:48.648949 containerd[1592]: time="2025-05-14T18:02:48.648892833Z" level=info msg="Container 0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:48.658919 containerd[1592]: time="2025-05-14T18:02:48.658872148Z" level=info msg="CreateContainer within sandbox \"c19548d614229c1cf074be9360f4ca995a5d3339e5d8f0c4b1e2f250218050eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b\""
May 14 18:02:48.659277 containerd[1592]: time="2025-05-14T18:02:48.659260437Z" level=info msg="StartContainer for \"0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b\""
May 14 18:02:48.660177 containerd[1592]: time="2025-05-14T18:02:48.660158522Z" level=info msg="connecting to shim 0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b" address="unix:///run/containerd/s/7c5a5bea7b5cf1b2f0a8ba583f5779ebd8463a876c776e8983f7228afddcd337" protocol=ttrpc version=3
May 14 18:02:48.660973 systemd[1]: Started cri-containerd-b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5.scope - libcontainer container b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5.
May 14 18:02:48.670957 systemd[1]: Started cri-containerd-bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444.scope - libcontainer container bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444.
May 14 18:02:48.697664 systemd[1]: Started cri-containerd-0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b.scope - libcontainer container 0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b.
May 14 18:02:48.767518 containerd[1592]: time="2025-05-14T18:02:48.766434582Z" level=info msg="StartContainer for \"bb162b9dc208235e7ef942cac845b6e50d1c46e193ce8b064c47103ea6e8b444\" returns successfully"
May 14 18:02:48.769323 containerd[1592]: time="2025-05-14T18:02:48.769265784Z" level=info msg="StartContainer for \"b6a7deb53bd40658610214dc1d34d829e470b7caf0d1dd5fad983cb6c7b48ed5\" returns successfully"
May 14 18:02:48.779122 containerd[1592]: time="2025-05-14T18:02:48.779063719Z" level=info msg="StartContainer for \"0e6e6173532cb2d40697cf427a5ae5e17cdffb5132025e747a5a7e9695069c2b\" returns successfully"
May 14 18:02:49.224934 kubelet[2361]: I0514 18:02:49.224860 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:02:49.686912 kubelet[2361]: E0514 18:02:49.686782 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:49.688646 kubelet[2361]: E0514 18:02:49.688623 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:49.689676 kubelet[2361]: E0514 18:02:49.689653 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:50.203210 kubelet[2361]: E0514 18:02:50.203161 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 14 18:02:50.294483 kubelet[2361]: I0514 18:02:50.294414 2361 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 14 18:02:50.294483 kubelet[2361]: E0514 18:02:50.294461 2361 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 14 18:02:50.305716 kubelet[2361]: E0514 18:02:50.305669 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.405855 kubelet[2361]: E0514 18:02:50.405792 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.506717 kubelet[2361]: E0514 18:02:50.506519 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.606979 kubelet[2361]: E0514 18:02:50.606918 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.691768 kubelet[2361]: E0514 18:02:50.691735 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:50.691768 kubelet[2361]: E0514 18:02:50.691772 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:50.691768 kubelet[2361]: E0514 18:02:50.691788 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:50.708041 kubelet[2361]: E0514 18:02:50.707985 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.808857 kubelet[2361]: E0514 18:02:50.808735 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:50.909224 kubelet[2361]: E0514 18:02:50.909167 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:02:51.583246 kubelet[2361]: I0514 18:02:51.583202 2361 apiserver.go:52] "Watching apiserver"
May 14 18:02:51.608479 kubelet[2361]: I0514 18:02:51.608443 2361 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 14 18:02:51.700056 kubelet[2361]: E0514 18:02:51.700001 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:52.290864 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-9.scope)...
May 14 18:02:52.290881 systemd[1]: Reloading...
May 14 18:02:52.384579 zram_generator::config[2681]: No configuration found.
May 14 18:02:52.400452 kubelet[2361]: E0514 18:02:52.400409 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:52.488869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:02:52.623061 systemd[1]: Reloading finished in 331 ms.
May 14 18:02:52.660942 kubelet[2361]: I0514 18:02:52.660906 2361 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:02:52.661250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:02:52.686997 systemd[1]: kubelet.service: Deactivated successfully.
May 14 18:02:52.687314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:02:52.687392 systemd[1]: kubelet.service: Consumed 1.081s CPU time, 117.9M memory peak.
May 14 18:02:52.689473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:02:52.897424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:02:52.908880 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:02:52.984748 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:02:52.984748 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 18:02:52.984748 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:02:52.985237 kubelet[2723]: I0514 18:02:52.984786 2723 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 18:02:52.992344 kubelet[2723]: I0514 18:02:52.991411 2723 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 18:02:52.992344 kubelet[2723]: I0514 18:02:52.991443 2723 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 18:02:52.992344 kubelet[2723]: I0514 18:02:52.991846 2723 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 18:02:52.993712 kubelet[2723]: I0514 18:02:52.993682 2723 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 18:02:52.995620 kubelet[2723]: I0514 18:02:52.995580 2723 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:02:52.998941 kubelet[2723]: I0514 18:02:52.998921 2723 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 18:02:53.003486 kubelet[2723]: I0514 18:02:53.003447 2723 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 18:02:53.003653 kubelet[2723]: I0514 18:02:53.003634 2723 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 18:02:53.003823 kubelet[2723]: I0514 18:02:53.003782 2723 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 18:02:53.004028 kubelet[2723]: I0514 18:02:53.003816 2723 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:02:53.004142 kubelet[2723]: I0514 18:02:53.004031 2723 topology_manager.go:138] "Creating topology manager with none policy"
May 14 18:02:53.004142 kubelet[2723]: I0514 18:02:53.004040 2723 container_manager_linux.go:300] "Creating device plugin manager"
May 14 18:02:53.004142 kubelet[2723]: I0514 18:02:53.004083 2723 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:02:53.004235 kubelet[2723]: I0514 18:02:53.004203 2723 kubelet.go:408] "Attempting to sync node with API server"
May 14 18:02:53.004235 kubelet[2723]: I0514 18:02:53.004215 2723 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 18:02:53.004295 kubelet[2723]: I0514 18:02:53.004245 2723 kubelet.go:314] "Adding apiserver pod source"
May 14 18:02:53.004295 kubelet[2723]: I0514 18:02:53.004259 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 18:02:53.005489 kubelet[2723]: I0514 18:02:53.005407 2723 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 18:02:53.006656 kubelet[2723]: I0514 18:02:53.006636 2723 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 18:02:53.007928 kubelet[2723]: I0514 18:02:53.007834 2723 server.go:1269] "Started kubelet"
May 14 18:02:53.009175 kubelet[2723]: I0514 18:02:53.008002 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 18:02:53.009913 kubelet[2723]: I0514 18:02:53.009867 2723 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.011126 2723 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.011427 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.012003 2723 server.go:460] "Adding debug handlers to kubelet server"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.012310 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.013402 2723 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.013466 2723 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 14 18:02:53.013884 kubelet[2723]: I0514 18:02:53.013601 2723 reconciler.go:26] "Reconciler: start to sync state"
May 14 18:02:53.017806 kubelet[2723]: I0514 18:02:53.016076 2723 factory.go:221] Registration of the systemd container factory successfully
May 14 18:02:53.017806 kubelet[2723]: I0514 18:02:53.016150 2723 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 18:02:53.019137 kubelet[2723]: I0514 18:02:53.018347 2723 factory.go:221] Registration of the containerd container factory successfully
May 14 18:02:53.020882 kubelet[2723]: I0514 18:02:53.020847 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 18:02:53.022203 kubelet[2723]: I0514 18:02:53.022182 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 18:02:53.022261 kubelet[2723]: I0514 18:02:53.022215 2723 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 18:02:53.022261 kubelet[2723]: I0514 18:02:53.022234 2723 kubelet.go:2321] "Starting kubelet main sync loop"
May 14 18:02:53.022317 kubelet[2723]: E0514 18:02:53.022271 2723 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 18:02:53.022556 kubelet[2723]: E0514 18:02:53.022515 2723 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 18:02:53.066041 kubelet[2723]: I0514 18:02:53.066003 2723 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 18:02:53.066041 kubelet[2723]: I0514 18:02:53.066026 2723 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 18:02:53.066255 kubelet[2723]: I0514 18:02:53.066070 2723 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:02:53.066310 kubelet[2723]: I0514 18:02:53.066282 2723 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 18:02:53.066342 kubelet[2723]: I0514 18:02:53.066301 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 18:02:53.066342 kubelet[2723]: I0514 18:02:53.066326 2723 policy_none.go:49] "None policy: Start"
May 14 18:02:53.067396 kubelet[2723]: I0514 18:02:53.067371 2723 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 18:02:53.067396 kubelet[2723]: I0514 18:02:53.067393 2723 state_mem.go:35] "Initializing new in-memory state store"
May 14 18:02:53.067633 kubelet[2723]: I0514 18:02:53.067613 2723 state_mem.go:75] "Updated machine memory state"
May 14 18:02:53.073592 kubelet[2723]: I0514 18:02:53.073539 2723 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 18:02:53.073955 kubelet[2723]: I0514 18:02:53.073818 2723 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:02:53.073955 kubelet[2723]: I0514 18:02:53.073833 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:02:53.074177 kubelet[2723]: I0514 18:02:53.074152 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:02:53.131338 kubelet[2723]: E0514 18:02:53.131247 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.131502 kubelet[2723]: E0514 18:02:53.131375 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 18:02:53.181996 kubelet[2723]: I0514 18:02:53.181858 2723 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:02:53.190152 kubelet[2723]: I0514 18:02:53.190091 2723 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 14 18:02:53.190306 kubelet[2723]: I0514 18:02:53.190200 2723 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 14 18:02:53.279698 sudo[2757]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 14 18:02:53.280028 sudo[2757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 14 18:02:53.314738 kubelet[2723]: I0514 18:02:53.314675 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:53.314855 kubelet[2723]: I0514 18:02:53.314729 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:53.314855 kubelet[2723]: I0514 18:02:53.314782 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.314855 kubelet[2723]: I0514 18:02:53.314805 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.314855 kubelet[2723]: I0514 18:02:53.314831 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.314855 kubelet[2723]: I0514 18:02:53.314850 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d4b84ffc65ebafc0ab849b1d0928f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8d4b84ffc65ebafc0ab849b1d0928f4\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:02:53.314989 kubelet[2723]: I0514 18:02:53.314867 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.314989 kubelet[2723]: I0514 18:02:53.314886 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:02:53.314989 kubelet[2723]: I0514 18:02:53.314937 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 14 18:02:53.430649 kubelet[2723]: E0514 18:02:53.430605 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:53.431811 kubelet[2723]: E0514 18:02:53.431770 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:53.432092 kubelet[2723]: E0514 18:02:53.431995 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:02:53.972742 sudo[2757]: pam_unix(sudo:session): session closed for user root
May 14 18:02:54.005563 kubelet[2723]: I0514 18:02:54.005505 2723 
apiserver.go:52] "Watching apiserver" May 14 18:02:54.013929 kubelet[2723]: I0514 18:02:54.013897 2723 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:02:54.047553 kubelet[2723]: E0514 18:02:54.043240 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:54.055184 kubelet[2723]: E0514 18:02:54.055132 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 18:02:54.055388 kubelet[2723]: E0514 18:02:54.055360 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:54.060583 kubelet[2723]: E0514 18:02:54.058826 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:02:54.060583 kubelet[2723]: E0514 18:02:54.059149 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:54.094497 kubelet[2723]: I0514 18:02:54.094340 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.094321919 podStartE2EDuration="1.094321919s" podCreationTimestamp="2025-05-14 18:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:54.093676366 +0000 UTC m=+1.180817858" watchObservedRunningTime="2025-05-14 18:02:54.094321919 +0000 UTC m=+1.181463411" May 14 18:02:54.113642 kubelet[2723]: I0514 18:02:54.113567 2723 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.113549041 podStartE2EDuration="3.113549041s" podCreationTimestamp="2025-05-14 18:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:54.106425951 +0000 UTC m=+1.193567443" watchObservedRunningTime="2025-05-14 18:02:54.113549041 +0000 UTC m=+1.200690533" May 14 18:02:54.122647 kubelet[2723]: I0514 18:02:54.122580 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.122559963 podStartE2EDuration="2.122559963s" podCreationTimestamp="2025-05-14 18:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:54.113638732 +0000 UTC m=+1.200780224" watchObservedRunningTime="2025-05-14 18:02:54.122559963 +0000 UTC m=+1.209701465" May 14 18:02:55.044225 kubelet[2723]: E0514 18:02:55.044156 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:55.044225 kubelet[2723]: E0514 18:02:55.044181 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:55.341741 sudo[1822]: pam_unix(sudo:session): session closed for user root May 14 18:02:55.343555 sshd[1821]: Connection closed by 10.0.0.1 port 46114 May 14 18:02:55.344106 sshd-session[1819]: pam_unix(sshd:session): session closed for user core May 14 18:02:55.348671 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:46114.service: Deactivated successfully. May 14 18:02:55.351459 systemd[1]: session-9.scope: Deactivated successfully. 
May 14 18:02:55.351762 systemd[1]: session-9.scope: Consumed 4.700s CPU time, 265.4M memory peak. May 14 18:02:55.353049 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. May 14 18:02:55.354140 systemd-logind[1568]: Removed session 9. May 14 18:02:56.817285 kubelet[2723]: E0514 18:02:56.817215 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:56.897812 kubelet[2723]: E0514 18:02:56.897771 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:58.322369 kubelet[2723]: I0514 18:02:58.322329 2723 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:02:58.322880 containerd[1592]: time="2025-05-14T18:02:58.322737963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 18:02:58.323200 kubelet[2723]: I0514 18:02:58.322907 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:02:59.313292 systemd[1]: Created slice kubepods-besteffort-pod06db5cd4_9e25_4797_9170_e5132eafaecd.slice - libcontainer container kubepods-besteffort-pod06db5cd4_9e25_4797_9170_e5132eafaecd.slice. May 14 18:02:59.329948 systemd[1]: Created slice kubepods-burstable-podbb2f2220_829f_4115_b377_883fb2088506.slice - libcontainer container kubepods-burstable-podbb2f2220_829f_4115_b377_883fb2088506.slice. 
May 14 18:02:59.349952 kubelet[2723]: I0514 18:02:59.349905 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06db5cd4-9e25-4797-9170-e5132eafaecd-kube-proxy\") pod \"kube-proxy-8klg4\" (UID: \"06db5cd4-9e25-4797-9170-e5132eafaecd\") " pod="kube-system/kube-proxy-8klg4" May 14 18:02:59.349952 kubelet[2723]: I0514 18:02:59.349950 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdqj\" (UniqueName: \"kubernetes.io/projected/06db5cd4-9e25-4797-9170-e5132eafaecd-kube-api-access-wgdqj\") pod \"kube-proxy-8klg4\" (UID: \"06db5cd4-9e25-4797-9170-e5132eafaecd\") " pod="kube-system/kube-proxy-8klg4" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.349979 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xl4g\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-kube-api-access-4xl4g\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.350004 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-bpf-maps\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.350027 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb2f2220-829f-4115-b377-883fb2088506-clustermesh-secrets\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.350045 2723 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-cgroup\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.350063 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-etc-cni-netd\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350449 kubelet[2723]: I0514 18:02:59.350127 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-xtables-lock\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350175 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-run\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350200 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cni-path\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350220 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/06db5cd4-9e25-4797-9170-e5132eafaecd-xtables-lock\") pod \"kube-proxy-8klg4\" (UID: \"06db5cd4-9e25-4797-9170-e5132eafaecd\") " pod="kube-system/kube-proxy-8klg4" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350241 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2f2220-829f-4115-b377-883fb2088506-cilium-config-path\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350267 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-hostproc\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350682 kubelet[2723]: I0514 18:02:59.350294 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-lib-modules\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350811 kubelet[2723]: I0514 18:02:59.350313 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-hubble-tls\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350811 kubelet[2723]: I0514 18:02:59.350355 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-net\") pod \"cilium-pc6dq\" (UID: 
\"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350811 kubelet[2723]: I0514 18:02:59.350386 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-kernel\") pod \"cilium-pc6dq\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " pod="kube-system/cilium-pc6dq" May 14 18:02:59.350811 kubelet[2723]: I0514 18:02:59.350435 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06db5cd4-9e25-4797-9170-e5132eafaecd-lib-modules\") pod \"kube-proxy-8klg4\" (UID: \"06db5cd4-9e25-4797-9170-e5132eafaecd\") " pod="kube-system/kube-proxy-8klg4" May 14 18:02:59.452384 kubelet[2723]: I0514 18:02:59.452340 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61503865-77a4-44b7-94d8-f98e7eab9355-cilium-config-path\") pod \"cilium-operator-5d85765b45-74jjp\" (UID: \"61503865-77a4-44b7-94d8-f98e7eab9355\") " pod="kube-system/cilium-operator-5d85765b45-74jjp" May 14 18:02:59.452493 kubelet[2723]: I0514 18:02:59.452459 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w94b\" (UniqueName: \"kubernetes.io/projected/61503865-77a4-44b7-94d8-f98e7eab9355-kube-api-access-4w94b\") pod \"cilium-operator-5d85765b45-74jjp\" (UID: \"61503865-77a4-44b7-94d8-f98e7eab9355\") " pod="kube-system/cilium-operator-5d85765b45-74jjp" May 14 18:02:59.454266 systemd[1]: Created slice kubepods-besteffort-pod61503865_77a4_44b7_94d8_f98e7eab9355.slice - libcontainer container kubepods-besteffort-pod61503865_77a4_44b7_94d8_f98e7eab9355.slice. 
May 14 18:02:59.622084 kubelet[2723]: E0514 18:02:59.621894 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.622832 containerd[1592]: time="2025-05-14T18:02:59.622783455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8klg4,Uid:06db5cd4-9e25-4797-9170-e5132eafaecd,Namespace:kube-system,Attempt:0,}" May 14 18:02:59.633360 kubelet[2723]: E0514 18:02:59.633317 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.633970 containerd[1592]: time="2025-05-14T18:02:59.633922651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc6dq,Uid:bb2f2220-829f-4115-b377-883fb2088506,Namespace:kube-system,Attempt:0,}" May 14 18:02:59.668828 containerd[1592]: time="2025-05-14T18:02:59.668773148Z" level=info msg="connecting to shim fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" namespace=k8s.io protocol=ttrpc version=3 May 14 18:02:59.692077 containerd[1592]: time="2025-05-14T18:02:59.692016642Z" level=info msg="connecting to shim 6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9" address="unix:///run/containerd/s/835a2626ab542a7edd98e97b22e07f79642bc08e7b152433efc6876cdc9863ea" namespace=k8s.io protocol=ttrpc version=3 May 14 18:02:59.706670 systemd[1]: Started cri-containerd-fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e.scope - libcontainer container fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e. 
May 14 18:02:59.711463 systemd[1]: Started cri-containerd-6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9.scope - libcontainer container 6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9. May 14 18:02:59.748505 containerd[1592]: time="2025-05-14T18:02:59.748431974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc6dq,Uid:bb2f2220-829f-4115-b377-883fb2088506,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\"" May 14 18:02:59.749485 kubelet[2723]: E0514 18:02:59.749453 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.750916 containerd[1592]: time="2025-05-14T18:02:59.750880926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 18:02:59.758393 kubelet[2723]: E0514 18:02:59.758353 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.758935 containerd[1592]: time="2025-05-14T18:02:59.758887601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-74jjp,Uid:61503865-77a4-44b7-94d8-f98e7eab9355,Namespace:kube-system,Attempt:0,}" May 14 18:02:59.790498 containerd[1592]: time="2025-05-14T18:02:59.790188688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8klg4,Uid:06db5cd4-9e25-4797-9170-e5132eafaecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9\"" May 14 18:02:59.792102 kubelet[2723]: E0514 18:02:59.791545 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.799203 containerd[1592]: time="2025-05-14T18:02:59.799142323Z" level=info msg="CreateContainer within sandbox \"6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:02:59.820598 containerd[1592]: time="2025-05-14T18:02:59.820533048Z" level=info msg="Container fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac: CDI devices from CRI Config.CDIDevices: []" May 14 18:02:59.826975 containerd[1592]: time="2025-05-14T18:02:59.826910059Z" level=info msg="connecting to shim 810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf" address="unix:///run/containerd/s/a48f7e210c5a5ed510c1f5a5478e5c7550f14fd1786c43f627bf9681453e4a2d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:02:59.838584 containerd[1592]: time="2025-05-14T18:02:59.838498056Z" level=info msg="CreateContainer within sandbox \"6a81be0c2264efe101f88462bfea69e22a1d581094c1ea06cd938abfe02090e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac\"" May 14 18:02:59.839545 containerd[1592]: time="2025-05-14T18:02:59.839471725Z" level=info msg="StartContainer for \"fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac\"" May 14 18:02:59.841606 containerd[1592]: time="2025-05-14T18:02:59.841570772Z" level=info msg="connecting to shim fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac" address="unix:///run/containerd/s/835a2626ab542a7edd98e97b22e07f79642bc08e7b152433efc6876cdc9863ea" protocol=ttrpc version=3 May 14 18:02:59.854892 systemd[1]: Started cri-containerd-810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf.scope - libcontainer container 810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf. 
May 14 18:02:59.864254 systemd[1]: Started cri-containerd-fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac.scope - libcontainer container fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac. May 14 18:02:59.920201 containerd[1592]: time="2025-05-14T18:02:59.920039909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-74jjp,Uid:61503865-77a4-44b7-94d8-f98e7eab9355,Namespace:kube-system,Attempt:0,} returns sandbox id \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\"" May 14 18:02:59.922696 kubelet[2723]: E0514 18:02:59.922651 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:02:59.932632 containerd[1592]: time="2025-05-14T18:02:59.932583351Z" level=info msg="StartContainer for \"fd0702af38c6c284d4fb15e51724a011f9c38d97fbf1c8cd1260b4d0b527f5ac\" returns successfully" May 14 18:03:00.054896 kubelet[2723]: E0514 18:03:00.054693 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:00.065566 kubelet[2723]: I0514 18:03:00.065467 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8klg4" podStartSLOduration=1.065449959 podStartE2EDuration="1.065449959s" podCreationTimestamp="2025-05-14 18:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:00.065226164 +0000 UTC m=+7.152367656" watchObservedRunningTime="2025-05-14 18:03:00.065449959 +0000 UTC m=+7.152591461" May 14 18:03:01.865145 update_engine[1571]: I20250514 18:03:01.865033 1571 update_attempter.cc:509] Updating boot flags... 
May 14 18:03:02.642792 kubelet[2723]: E0514 18:03:02.642752 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:03.063744 kubelet[2723]: E0514 18:03:03.063708 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:06.822218 kubelet[2723]: E0514 18:03:06.822175 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:06.904463 kubelet[2723]: E0514 18:03:06.902728 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:07.488273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268981963.mount: Deactivated successfully. 
May 14 18:03:10.101074 containerd[1592]: time="2025-05-14T18:03:10.100991350Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:10.102372 containerd[1592]: time="2025-05-14T18:03:10.102282035Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 18:03:10.104379 containerd[1592]: time="2025-05-14T18:03:10.104318930Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:10.106207 containerd[1592]: time="2025-05-14T18:03:10.106131592Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.355205711s" May 14 18:03:10.106207 containerd[1592]: time="2025-05-14T18:03:10.106196014Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 18:03:10.111994 containerd[1592]: time="2025-05-14T18:03:10.111705173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 18:03:10.113285 containerd[1592]: time="2025-05-14T18:03:10.113240240Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:03:10.128175 containerd[1592]: time="2025-05-14T18:03:10.128129285Z" level=info msg="Container 9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:10.139260 containerd[1592]: time="2025-05-14T18:03:10.139201155Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\"" May 14 18:03:10.139907 containerd[1592]: time="2025-05-14T18:03:10.139847545Z" level=info msg="StartContainer for \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\"" May 14 18:03:10.140890 containerd[1592]: time="2025-05-14T18:03:10.140848484Z" level=info msg="connecting to shim 9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" protocol=ttrpc version=3 May 14 18:03:10.204846 systemd[1]: Started cri-containerd-9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41.scope - libcontainer container 9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41. May 14 18:03:10.240612 containerd[1592]: time="2025-05-14T18:03:10.240563618Z" level=info msg="StartContainer for \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" returns successfully" May 14 18:03:10.251585 systemd[1]: cri-containerd-9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41.scope: Deactivated successfully. 
May 14 18:03:10.253900 containerd[1592]: time="2025-05-14T18:03:10.253769726Z" level=info msg="received exit event container_id:\"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" id:\"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" pid:3157 exited_at:{seconds:1747245790 nanos:252665392}" May 14 18:03:10.253900 containerd[1592]: time="2025-05-14T18:03:10.253857932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" id:\"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" pid:3157 exited_at:{seconds:1747245790 nanos:252665392}" May 14 18:03:10.274719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41-rootfs.mount: Deactivated successfully. May 14 18:03:11.090701 kubelet[2723]: E0514 18:03:11.090650 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:11.092724 containerd[1592]: time="2025-05-14T18:03:11.092674243Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:03:11.193110 containerd[1592]: time="2025-05-14T18:03:11.193052728Z" level=info msg="Container d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:11.200619 containerd[1592]: time="2025-05-14T18:03:11.200561737Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\"" May 14 18:03:11.201273 containerd[1592]: 
time="2025-05-14T18:03:11.201211172Z" level=info msg="StartContainer for \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\"" May 14 18:03:11.203170 containerd[1592]: time="2025-05-14T18:03:11.203117980Z" level=info msg="connecting to shim d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" protocol=ttrpc version=3 May 14 18:03:11.230644 systemd[1]: Started cri-containerd-d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544.scope - libcontainer container d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544. May 14 18:03:11.263716 containerd[1592]: time="2025-05-14T18:03:11.263568715Z" level=info msg="StartContainer for \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" returns successfully" May 14 18:03:11.278718 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:03:11.279204 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:03:11.279939 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 18:03:11.282224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 14 18:03:11.284120 containerd[1592]: time="2025-05-14T18:03:11.284091761Z" level=info msg="received exit event container_id:\"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" id:\"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" pid:3200 exited_at:{seconds:1747245791 nanos:283845006}" May 14 18:03:11.284808 containerd[1592]: time="2025-05-14T18:03:11.284760252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" id:\"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" pid:3200 exited_at:{seconds:1747245791 nanos:283845006}" May 14 18:03:11.285178 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:03:11.285909 systemd[1]: cri-containerd-d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544.scope: Deactivated successfully. May 14 18:03:11.315135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 18:03:12.094586 kubelet[2723]: E0514 18:03:12.094553 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:12.097449 containerd[1592]: time="2025-05-14T18:03:12.097402608Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:03:12.112034 containerd[1592]: time="2025-05-14T18:03:12.111970457Z" level=info msg="Container 33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:12.124109 containerd[1592]: time="2025-05-14T18:03:12.124058578Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\"" May 14 18:03:12.124683 containerd[1592]: time="2025-05-14T18:03:12.124654092Z" level=info msg="StartContainer for \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\"" May 14 18:03:12.126318 containerd[1592]: time="2025-05-14T18:03:12.126288434Z" level=info msg="connecting to shim 33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" protocol=ttrpc version=3 May 14 18:03:12.149707 systemd[1]: Started cri-containerd-33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41.scope - libcontainer container 33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41. May 14 18:03:12.196480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544-rootfs.mount: Deactivated successfully. 
May 14 18:03:12.211712 systemd[1]: cri-containerd-33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41.scope: Deactivated successfully. May 14 18:03:12.212760 containerd[1592]: time="2025-05-14T18:03:12.212729084Z" level=info msg="StartContainer for \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" returns successfully" May 14 18:03:12.213580 containerd[1592]: time="2025-05-14T18:03:12.213265115Z" level=info msg="received exit event container_id:\"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" id:\"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" pid:3247 exited_at:{seconds:1747245792 nanos:213137745}" May 14 18:03:12.213580 containerd[1592]: time="2025-05-14T18:03:12.213379872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" id:\"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" pid:3247 exited_at:{seconds:1747245792 nanos:213137745}" May 14 18:03:12.244813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41-rootfs.mount: Deactivated successfully. May 14 18:03:12.257421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312362202.mount: Deactivated successfully. 
May 14 18:03:13.101883 kubelet[2723]: E0514 18:03:13.101833 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:13.103597 containerd[1592]: time="2025-05-14T18:03:13.103517130Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:03:13.392740 containerd[1592]: time="2025-05-14T18:03:13.392635004Z" level=info msg="Container 5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:13.641123 containerd[1592]: time="2025-05-14T18:03:13.641086336Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\"" May 14 18:03:13.641426 containerd[1592]: time="2025-05-14T18:03:13.641398353Z" level=info msg="StartContainer for \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\"" May 14 18:03:13.642319 containerd[1592]: time="2025-05-14T18:03:13.642292981Z" level=info msg="connecting to shim 5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" protocol=ttrpc version=3 May 14 18:03:13.659675 systemd[1]: Started cri-containerd-5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478.scope - libcontainer container 5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478. May 14 18:03:13.688183 systemd[1]: cri-containerd-5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478.scope: Deactivated successfully. 
May 14 18:03:13.689424 containerd[1592]: time="2025-05-14T18:03:13.689391572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" id:\"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" pid:3303 exited_at:{seconds:1747245793 nanos:688474643}" May 14 18:03:13.693635 containerd[1592]: time="2025-05-14T18:03:13.693603715Z" level=info msg="received exit event container_id:\"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" id:\"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" pid:3303 exited_at:{seconds:1747245793 nanos:688474643}" May 14 18:03:13.701310 containerd[1592]: time="2025-05-14T18:03:13.701219345Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:13.701517 containerd[1592]: time="2025-05-14T18:03:13.701472342Z" level=info msg="StartContainer for \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" returns successfully" May 14 18:03:13.703167 containerd[1592]: time="2025-05-14T18:03:13.703142251Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 18:03:13.708366 containerd[1592]: time="2025-05-14T18:03:13.708321747Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:13.709768 containerd[1592]: time="2025-05-14T18:03:13.709238356Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.597484422s" May 14 18:03:13.709837 containerd[1592]: time="2025-05-14T18:03:13.709773645Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 18:03:13.711950 containerd[1592]: time="2025-05-14T18:03:13.711914853Z" level=info msg="CreateContainer within sandbox \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 18:03:13.716586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478-rootfs.mount: Deactivated successfully. May 14 18:03:13.882947 containerd[1592]: time="2025-05-14T18:03:13.882901914Z" level=info msg="Container fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:13.892237 containerd[1592]: time="2025-05-14T18:03:13.892163478Z" level=info msg="CreateContainer within sandbox \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\"" May 14 18:03:13.892960 containerd[1592]: time="2025-05-14T18:03:13.892921016Z" level=info msg="StartContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\"" May 14 18:03:13.893959 containerd[1592]: time="2025-05-14T18:03:13.893929288Z" level=info msg="connecting to shim fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030" address="unix:///run/containerd/s/a48f7e210c5a5ed510c1f5a5478e5c7550f14fd1786c43f627bf9681453e4a2d" protocol=ttrpc version=3 May 14 
18:03:13.918770 systemd[1]: Started cri-containerd-fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030.scope - libcontainer container fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030. May 14 18:03:14.001550 containerd[1592]: time="2025-05-14T18:03:14.001474550Z" level=info msg="StartContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" returns successfully" May 14 18:03:14.105001 kubelet[2723]: E0514 18:03:14.104952 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:14.109178 kubelet[2723]: E0514 18:03:14.109143 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:14.110857 containerd[1592]: time="2025-05-14T18:03:14.110810776Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:03:14.203848 kubelet[2723]: I0514 18:03:14.203680 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-74jjp" podStartSLOduration=1.416998048 podStartE2EDuration="15.203660534s" podCreationTimestamp="2025-05-14 18:02:59 +0000 UTC" firstStartedPulling="2025-05-14 18:02:59.923729516 +0000 UTC m=+7.010871008" lastFinishedPulling="2025-05-14 18:03:13.710392002 +0000 UTC m=+20.797533494" observedRunningTime="2025-05-14 18:03:14.203241414 +0000 UTC m=+21.290382916" watchObservedRunningTime="2025-05-14 18:03:14.203660534 +0000 UTC m=+21.290802026" May 14 18:03:14.220316 containerd[1592]: time="2025-05-14T18:03:14.220257110Z" level=info msg="Container 4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0: CDI devices from CRI Config.CDIDevices: []" May 14 
18:03:14.236489 containerd[1592]: time="2025-05-14T18:03:14.236423515Z" level=info msg="CreateContainer within sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\"" May 14 18:03:14.237087 containerd[1592]: time="2025-05-14T18:03:14.237053152Z" level=info msg="StartContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\"" May 14 18:03:14.238231 containerd[1592]: time="2025-05-14T18:03:14.238160429Z" level=info msg="connecting to shim 4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0" address="unix:///run/containerd/s/990a13744628832c31e0767700b5400f0f20c4bef29b9f97e30805de843e3ecc" protocol=ttrpc version=3 May 14 18:03:14.277728 systemd[1]: Started cri-containerd-4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0.scope - libcontainer container 4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0. May 14 18:03:14.349720 containerd[1592]: time="2025-05-14T18:03:14.349674110Z" level=info msg="StartContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" returns successfully" May 14 18:03:14.450024 containerd[1592]: time="2025-05-14T18:03:14.449980295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" id:\"50063129d3330e79d82bfabc84991901cb364479bed52f674c13c487147cea2f\" pid:3408 exited_at:{seconds:1747245794 nanos:449655462}" May 14 18:03:14.466198 kubelet[2723]: I0514 18:03:14.466057 2723 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:03:14.761866 systemd[1]: Created slice kubepods-burstable-pod75727e82_9ff7_46c3_96bc_bd6266674e02.slice - libcontainer container kubepods-burstable-pod75727e82_9ff7_46c3_96bc_bd6266674e02.slice. 
May 14 18:03:14.772740 systemd[1]: Created slice kubepods-burstable-podf6dafd44_09be_41a4_813b_1dfdbc5ecd73.slice - libcontainer container kubepods-burstable-podf6dafd44_09be_41a4_813b_1dfdbc5ecd73.slice. May 14 18:03:14.889236 kubelet[2723]: I0514 18:03:14.889172 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znvm\" (UniqueName: \"kubernetes.io/projected/75727e82-9ff7-46c3-96bc-bd6266674e02-kube-api-access-2znvm\") pod \"coredns-6f6b679f8f-nmd4f\" (UID: \"75727e82-9ff7-46c3-96bc-bd6266674e02\") " pod="kube-system/coredns-6f6b679f8f-nmd4f" May 14 18:03:14.889635 kubelet[2723]: I0514 18:03:14.889324 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zldq\" (UniqueName: \"kubernetes.io/projected/f6dafd44-09be-41a4-813b-1dfdbc5ecd73-kube-api-access-9zldq\") pod \"coredns-6f6b679f8f-d48wh\" (UID: \"f6dafd44-09be-41a4-813b-1dfdbc5ecd73\") " pod="kube-system/coredns-6f6b679f8f-d48wh" May 14 18:03:14.889635 kubelet[2723]: I0514 18:03:14.889353 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6dafd44-09be-41a4-813b-1dfdbc5ecd73-config-volume\") pod \"coredns-6f6b679f8f-d48wh\" (UID: \"f6dafd44-09be-41a4-813b-1dfdbc5ecd73\") " pod="kube-system/coredns-6f6b679f8f-d48wh" May 14 18:03:14.889635 kubelet[2723]: I0514 18:03:14.889375 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75727e82-9ff7-46c3-96bc-bd6266674e02-config-volume\") pod \"coredns-6f6b679f8f-nmd4f\" (UID: \"75727e82-9ff7-46c3-96bc-bd6266674e02\") " pod="kube-system/coredns-6f6b679f8f-nmd4f" May 14 18:03:15.068625 kubelet[2723]: E0514 18:03:15.068461 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:15.069180 containerd[1592]: time="2025-05-14T18:03:15.069115351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nmd4f,Uid:75727e82-9ff7-46c3-96bc-bd6266674e02,Namespace:kube-system,Attempt:0,}" May 14 18:03:15.076314 kubelet[2723]: E0514 18:03:15.076275 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:15.077636 containerd[1592]: time="2025-05-14T18:03:15.077602015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d48wh,Uid:f6dafd44-09be-41a4-813b-1dfdbc5ecd73,Namespace:kube-system,Attempt:0,}" May 14 18:03:15.223681 kubelet[2723]: E0514 18:03:15.223641 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:15.224163 kubelet[2723]: E0514 18:03:15.223709 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:15.271152 kubelet[2723]: I0514 18:03:15.271079 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pc6dq" podStartSLOduration=5.910035561 podStartE2EDuration="16.2710646s" podCreationTimestamp="2025-05-14 18:02:59 +0000 UTC" firstStartedPulling="2025-05-14 18:02:59.750411323 +0000 UTC m=+6.837552816" lastFinishedPulling="2025-05-14 18:03:10.111440363 +0000 UTC m=+17.198581855" observedRunningTime="2025-05-14 18:03:15.27092638 +0000 UTC m=+22.358067882" watchObservedRunningTime="2025-05-14 18:03:15.2710646 +0000 UTC m=+22.358206092" May 14 18:03:16.224280 kubelet[2723]: E0514 18:03:16.224233 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:17.226282 kubelet[2723]: E0514 18:03:17.226218 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:17.541299 systemd-networkd[1492]: cilium_host: Link UP May 14 18:03:17.541506 systemd-networkd[1492]: cilium_net: Link UP May 14 18:03:17.542379 systemd-networkd[1492]: cilium_net: Gained carrier May 14 18:03:17.543087 systemd-networkd[1492]: cilium_host: Gained carrier May 14 18:03:17.656381 systemd-networkd[1492]: cilium_vxlan: Link UP May 14 18:03:17.656391 systemd-networkd[1492]: cilium_vxlan: Gained carrier May 14 18:03:17.898580 kernel: NET: Registered PF_ALG protocol family May 14 18:03:18.305857 systemd-networkd[1492]: cilium_host: Gained IPv6LL May 14 18:03:18.497786 systemd-networkd[1492]: cilium_net: Gained IPv6LL May 14 18:03:18.582953 systemd-networkd[1492]: lxc_health: Link UP May 14 18:03:18.583920 systemd-networkd[1492]: lxc_health: Gained carrier May 14 18:03:18.697603 kernel: eth0: renamed from tmp3d138 May 14 18:03:18.699165 systemd-networkd[1492]: lxcb99ebbb6f9db: Link UP May 14 18:03:18.702432 systemd-networkd[1492]: lxcb99ebbb6f9db: Gained carrier May 14 18:03:18.703211 systemd-networkd[1492]: lxca3fa92e762c5: Link UP May 14 18:03:18.708553 kernel: eth0: renamed from tmp02cf6 May 14 18:03:18.710863 systemd-networkd[1492]: lxca3fa92e762c5: Gained carrier May 14 18:03:19.394597 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL May 14 18:03:19.635425 kubelet[2723]: E0514 18:03:19.635245 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:20.033734 systemd-networkd[1492]: lxc_health: Gained IPv6LL May 14 18:03:20.225872 systemd-networkd[1492]: lxca3fa92e762c5: Gained IPv6LL May 
14 18:03:20.233773 kubelet[2723]: E0514 18:03:20.233726 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:20.737737 systemd-networkd[1492]: lxcb99ebbb6f9db: Gained IPv6LL May 14 18:03:21.234887 kubelet[2723]: E0514 18:03:21.234787 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:22.273843 containerd[1592]: time="2025-05-14T18:03:22.273717315Z" level=info msg="connecting to shim 3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c" address="unix:///run/containerd/s/f9d2763812a6346515dfb114b3e29e79d0451ba4b0da2a4b0b1c7c4c4a95b4f8" namespace=k8s.io protocol=ttrpc version=3 May 14 18:03:22.275833 containerd[1592]: time="2025-05-14T18:03:22.275767463Z" level=info msg="connecting to shim 02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a" address="unix:///run/containerd/s/f9590a18e199714890552b9394acbe8e7e5c217d416cde7cf38220eaacc0fb48" namespace=k8s.io protocol=ttrpc version=3 May 14 18:03:22.309907 systemd[1]: Started cri-containerd-02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a.scope - libcontainer container 02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a. May 14 18:03:22.312160 systemd[1]: Started cri-containerd-3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c.scope - libcontainer container 3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c. 
May 14 18:03:22.326646 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:03:22.329053 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:03:22.469562 containerd[1592]: time="2025-05-14T18:03:22.469463147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d48wh,Uid:f6dafd44-09be-41a4-813b-1dfdbc5ecd73,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c\"" May 14 18:03:22.473152 kubelet[2723]: E0514 18:03:22.473114 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:22.478927 containerd[1592]: time="2025-05-14T18:03:22.478873243Z" level=info msg="CreateContainer within sandbox \"3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:03:22.494505 containerd[1592]: time="2025-05-14T18:03:22.494461604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nmd4f,Uid:75727e82-9ff7-46c3-96bc-bd6266674e02,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a\"" May 14 18:03:22.495228 kubelet[2723]: E0514 18:03:22.495201 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:22.496970 containerd[1592]: time="2025-05-14T18:03:22.496925590Z" level=info msg="CreateContainer within sandbox \"02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:03:22.651272 systemd[1]: Started 
sshd@9-10.0.0.42:22-10.0.0.1:36224.service - OpenSSH per-connection server daemon (10.0.0.1:36224). May 14 18:03:22.660792 containerd[1592]: time="2025-05-14T18:03:22.660729080Z" level=info msg="Container add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:22.661517 containerd[1592]: time="2025-05-14T18:03:22.661482467Z" level=info msg="Container 38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:22.672353 containerd[1592]: time="2025-05-14T18:03:22.672310231Z" level=info msg="CreateContainer within sandbox \"02cf6478e6293e41a5f2f169c30f6ed06007e410d2336b690fccbe679feda17a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9\"" May 14 18:03:22.673329 containerd[1592]: time="2025-05-14T18:03:22.673277170Z" level=info msg="StartContainer for \"add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9\"" May 14 18:03:22.673899 containerd[1592]: time="2025-05-14T18:03:22.673866990Z" level=info msg="CreateContainer within sandbox \"3d138364b493833fa609da99f13c56b2f23fe8c45ea210ed716220d0bf663e5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154\"" May 14 18:03:22.674301 containerd[1592]: time="2025-05-14T18:03:22.674283584Z" level=info msg="StartContainer for \"38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154\"" May 14 18:03:22.675163 containerd[1592]: time="2025-05-14T18:03:22.675135648Z" level=info msg="connecting to shim 38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154" address="unix:///run/containerd/s/f9d2763812a6346515dfb114b3e29e79d0451ba4b0da2a4b0b1c7c4c4a95b4f8" protocol=ttrpc version=3 May 14 18:03:22.676233 containerd[1592]: time="2025-05-14T18:03:22.676188568Z" level=info msg="connecting to shim 
add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9" address="unix:///run/containerd/s/f9590a18e199714890552b9394acbe8e7e5c217d416cde7cf38220eaacc0fb48" protocol=ttrpc version=3 May 14 18:03:22.698679 systemd[1]: Started cri-containerd-add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9.scope - libcontainer container add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9. May 14 18:03:22.702706 systemd[1]: Started cri-containerd-38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154.scope - libcontainer container 38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154. May 14 18:03:22.712369 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 36224 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:22.714289 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:22.723474 systemd-logind[1568]: New session 10 of user core. May 14 18:03:22.730883 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:03:22.740057 containerd[1592]: time="2025-05-14T18:03:22.739971031Z" level=info msg="StartContainer for \"38d596fc5433677ae9ab1494521cf1ce88413e5e91c0ffd39be528110911c154\" returns successfully" May 14 18:03:22.761507 containerd[1592]: time="2025-05-14T18:03:22.761468560Z" level=info msg="StartContainer for \"add47d5e443a3f8f6e516b9ac4e466a58a21e66d746ed5fc7f552cce8c9cd0d9\" returns successfully" May 14 18:03:22.894100 sshd[4029]: Connection closed by 10.0.0.1 port 36224 May 14 18:03:22.894772 sshd-session[3982]: pam_unix(sshd:session): session closed for user core May 14 18:03:22.900049 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:36224.service: Deactivated successfully. May 14 18:03:22.902914 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:03:22.906249 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. May 14 18:03:22.908477 systemd-logind[1568]: Removed session 10. 
May 14 18:03:23.242542 kubelet[2723]: E0514 18:03:23.242480 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:23.248486 kubelet[2723]: E0514 18:03:23.247608 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:23.298920 kubelet[2723]: I0514 18:03:23.298598 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d48wh" podStartSLOduration=24.298581097 podStartE2EDuration="24.298581097s" podCreationTimestamp="2025-05-14 18:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:23.29821054 +0000 UTC m=+30.385352062" watchObservedRunningTime="2025-05-14 18:03:23.298581097 +0000 UTC m=+30.385722599" May 14 18:03:23.538109 kubelet[2723]: I0514 18:03:23.537288 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nmd4f" podStartSLOduration=24.537269924 podStartE2EDuration="24.537269924s" podCreationTimestamp="2025-05-14 18:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:23.536884368 +0000 UTC m=+30.624025880" watchObservedRunningTime="2025-05-14 18:03:23.537269924 +0000 UTC m=+30.624411406" May 14 18:03:24.249546 kubelet[2723]: E0514 18:03:24.249504 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:24.249661 kubelet[2723]: E0514 18:03:24.249622 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:25.251506 kubelet[2723]: E0514 18:03:25.251463 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:27.908198 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:55610.service - OpenSSH per-connection server daemon (10.0.0.1:55610). May 14 18:03:27.947183 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 55610 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:27.949034 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:27.954161 systemd-logind[1568]: New session 11 of user core. May 14 18:03:27.969706 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:03:28.103931 sshd[4082]: Connection closed by 10.0.0.1 port 55610 May 14 18:03:28.104241 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 14 18:03:28.109712 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:55610.service: Deactivated successfully. May 14 18:03:28.112356 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:03:28.113240 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. May 14 18:03:28.114972 systemd-logind[1568]: Removed session 11. May 14 18:03:33.122264 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626). May 14 18:03:33.203040 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:33.204939 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:33.210755 systemd-logind[1568]: New session 12 of user core. May 14 18:03:33.224666 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 18:03:33.338284 sshd[4101]: Connection closed by 10.0.0.1 port 55626 May 14 18:03:33.338418 sshd-session[4099]: pam_unix(sshd:session): session closed for user core May 14 18:03:33.343685 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:55626.service: Deactivated successfully. May 14 18:03:33.346380 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:03:33.347319 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. May 14 18:03:33.349087 systemd-logind[1568]: Removed session 12. May 14 18:03:38.355001 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:50080.service - OpenSSH per-connection server daemon (10.0.0.1:50080). May 14 18:03:38.413554 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 50080 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:38.415504 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:38.420667 systemd-logind[1568]: New session 13 of user core. May 14 18:03:38.429825 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:03:38.547616 sshd[4118]: Connection closed by 10.0.0.1 port 50080 May 14 18:03:38.547963 sshd-session[4116]: pam_unix(sshd:session): session closed for user core May 14 18:03:38.558252 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:50080.service: Deactivated successfully. May 14 18:03:38.560508 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:03:38.561661 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. May 14 18:03:38.565875 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:50084.service - OpenSSH per-connection server daemon (10.0.0.1:50084). May 14 18:03:38.566632 systemd-logind[1568]: Removed session 13. 
May 14 18:03:38.621788 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 50084 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:38.623855 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:38.630024 systemd-logind[1568]: New session 14 of user core. May 14 18:03:38.638799 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:03:38.806316 sshd[4134]: Connection closed by 10.0.0.1 port 50084 May 14 18:03:38.808498 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 14 18:03:38.817889 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:50084.service: Deactivated successfully. May 14 18:03:38.821141 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:03:38.822248 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. May 14 18:03:38.827959 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:50096.service - OpenSSH per-connection server daemon (10.0.0.1:50096). May 14 18:03:38.829016 systemd-logind[1568]: Removed session 14. May 14 18:03:38.885984 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 50096 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:38.887852 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:38.893314 systemd-logind[1568]: New session 15 of user core. May 14 18:03:38.905841 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:03:39.034699 sshd[4148]: Connection closed by 10.0.0.1 port 50096 May 14 18:03:39.035013 sshd-session[4146]: pam_unix(sshd:session): session closed for user core May 14 18:03:39.038700 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:50096.service: Deactivated successfully. May 14 18:03:39.040795 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:03:39.042323 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. 
May 14 18:03:39.043366 systemd-logind[1568]: Removed session 15. May 14 18:03:44.055966 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:50108.service - OpenSSH per-connection server daemon (10.0.0.1:50108). May 14 18:03:44.103988 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 50108 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:44.105517 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:44.109907 systemd-logind[1568]: New session 16 of user core. May 14 18:03:44.119772 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:03:44.275373 sshd[4164]: Connection closed by 10.0.0.1 port 50108 May 14 18:03:44.275722 sshd-session[4162]: pam_unix(sshd:session): session closed for user core May 14 18:03:44.279798 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:50108.service: Deactivated successfully. May 14 18:03:44.281885 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:03:44.282761 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. May 14 18:03:44.284225 systemd-logind[1568]: Removed session 16. May 14 18:03:49.289831 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:52094.service - OpenSSH per-connection server daemon (10.0.0.1:52094). May 14 18:03:49.348264 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 52094 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:49.350856 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:49.356784 systemd-logind[1568]: New session 17 of user core. May 14 18:03:49.364835 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 14 18:03:49.485072 sshd[4179]: Connection closed by 10.0.0.1 port 52094 May 14 18:03:49.487412 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 14 18:03:49.499682 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:52094.service: Deactivated successfully. May 14 18:03:49.501473 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:03:49.502247 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. May 14 18:03:49.505039 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). May 14 18:03:49.505732 systemd-logind[1568]: Removed session 17. May 14 18:03:49.554952 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:49.557079 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:49.562825 systemd-logind[1568]: New session 18 of user core. May 14 18:03:49.572833 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:03:50.005167 sshd[4195]: Connection closed by 10.0.0.1 port 52098 May 14 18:03:50.005876 sshd-session[4192]: pam_unix(sshd:session): session closed for user core May 14 18:03:50.018485 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:52098.service: Deactivated successfully. May 14 18:03:50.020949 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:03:50.021892 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. May 14 18:03:50.025577 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:52112.service - OpenSSH per-connection server daemon (10.0.0.1:52112). May 14 18:03:50.026314 systemd-logind[1568]: Removed session 18. 
May 14 18:03:50.079875 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 52112 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:50.081741 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:50.086608 systemd-logind[1568]: New session 19 of user core. May 14 18:03:50.096722 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:03:51.659987 sshd[4208]: Connection closed by 10.0.0.1 port 52112 May 14 18:03:51.660322 sshd-session[4206]: pam_unix(sshd:session): session closed for user core May 14 18:03:51.672130 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:52112.service: Deactivated successfully. May 14 18:03:51.675180 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:03:51.676109 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. May 14 18:03:51.679151 systemd-logind[1568]: Removed session 19. May 14 18:03:51.680887 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). May 14 18:03:51.723604 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:51.725135 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:51.730135 systemd-logind[1568]: New session 20 of user core. May 14 18:03:51.738702 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:03:51.962800 sshd[4234]: Connection closed by 10.0.0.1 port 52128 May 14 18:03:51.963712 sshd-session[4232]: pam_unix(sshd:session): session closed for user core May 14 18:03:51.974861 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:52128.service: Deactivated successfully. May 14 18:03:51.976929 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:03:51.977862 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. 
May 14 18:03:51.981075 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:52142.service - OpenSSH per-connection server daemon (10.0.0.1:52142). May 14 18:03:51.982281 systemd-logind[1568]: Removed session 20. May 14 18:03:52.028367 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 52142 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:52.029956 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:52.034307 systemd-logind[1568]: New session 21 of user core. May 14 18:03:52.043668 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:03:52.158315 sshd[4248]: Connection closed by 10.0.0.1 port 52142 May 14 18:03:52.159050 sshd-session[4246]: pam_unix(sshd:session): session closed for user core May 14 18:03:52.164185 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:52142.service: Deactivated successfully. May 14 18:03:52.166455 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:03:52.167480 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. May 14 18:03:52.169194 systemd-logind[1568]: Removed session 21. May 14 18:03:57.173072 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:38792.service - OpenSSH per-connection server daemon (10.0.0.1:38792). May 14 18:03:57.231259 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 38792 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:03:57.232858 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:57.237811 systemd-logind[1568]: New session 22 of user core. May 14 18:03:57.245773 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 18:03:57.357896 sshd[4265]: Connection closed by 10.0.0.1 port 38792 May 14 18:03:57.358779 sshd-session[4263]: pam_unix(sshd:session): session closed for user core May 14 18:03:57.363228 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:38792.service: Deactivated successfully. May 14 18:03:57.365778 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:03:57.368974 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. May 14 18:03:57.370035 systemd-logind[1568]: Removed session 22. May 14 18:04:02.376729 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800). May 14 18:04:02.440838 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:02.443316 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:02.450120 systemd-logind[1568]: New session 23 of user core. May 14 18:04:02.461769 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:04:02.580464 sshd[4285]: Connection closed by 10.0.0.1 port 38800 May 14 18:04:02.581034 sshd-session[4283]: pam_unix(sshd:session): session closed for user core May 14 18:04:02.587040 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:38800.service: Deactivated successfully. May 14 18:04:02.589595 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:04:02.590456 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. May 14 18:04:02.591687 systemd-logind[1568]: Removed session 23. 
May 14 18:04:07.023177 kubelet[2723]: E0514 18:04:07.023136 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:07.023652 kubelet[2723]: E0514 18:04:07.023245 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:07.600394 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860). May 14 18:04:07.650660 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:07.652417 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:07.657223 systemd-logind[1568]: New session 24 of user core. May 14 18:04:07.663673 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:04:07.780081 sshd[4300]: Connection closed by 10.0.0.1 port 57860 May 14 18:04:07.780365 sshd-session[4298]: pam_unix(sshd:session): session closed for user core May 14 18:04:07.785241 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:57860.service: Deactivated successfully. May 14 18:04:07.787706 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:04:07.788491 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. May 14 18:04:07.790032 systemd-logind[1568]: Removed session 24. May 14 18:04:08.022910 kubelet[2723]: E0514 18:04:08.022863 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:12.794094 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:57862.service - OpenSSH per-connection server daemon (10.0.0.1:57862). 
May 14 18:04:12.847059 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 57862 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:12.848647 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:12.853054 systemd-logind[1568]: New session 25 of user core. May 14 18:04:12.863707 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 18:04:12.974595 sshd[4316]: Connection closed by 10.0.0.1 port 57862 May 14 18:04:12.975168 sshd-session[4314]: pam_unix(sshd:session): session closed for user core May 14 18:04:12.984320 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:57862.service: Deactivated successfully. May 14 18:04:12.986214 systemd[1]: session-25.scope: Deactivated successfully. May 14 18:04:12.987093 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. May 14 18:04:12.990095 systemd[1]: Started sshd@25-10.0.0.42:22-10.0.0.1:57872.service - OpenSSH per-connection server daemon (10.0.0.1:57872). May 14 18:04:12.990697 systemd-logind[1568]: Removed session 25. May 14 18:04:13.040057 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 57872 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:13.041683 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:13.046253 systemd-logind[1568]: New session 26 of user core. May 14 18:04:13.055683 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 14 18:04:14.747263 containerd[1592]: time="2025-05-14T18:04:14.747218170Z" level=info msg="StopContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" with timeout 30 (s)" May 14 18:04:14.756447 containerd[1592]: time="2025-05-14T18:04:14.756391406Z" level=info msg="Stop container \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" with signal terminated" May 14 18:04:14.768213 systemd[1]: cri-containerd-fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030.scope: Deactivated successfully. May 14 18:04:14.769798 containerd[1592]: time="2025-05-14T18:04:14.769714099Z" level=info msg="received exit event container_id:\"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" id:\"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" pid:3340 exited_at:{seconds:1747245854 nanos:769309788}" May 14 18:04:14.769953 containerd[1592]: time="2025-05-14T18:04:14.769737904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" id:\"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" pid:3340 exited_at:{seconds:1747245854 nanos:769309788}" May 14 18:04:14.777634 containerd[1592]: time="2025-05-14T18:04:14.777602257Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:04:14.778812 containerd[1592]: time="2025-05-14T18:04:14.778753971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" id:\"53b2145ae66e532c96a8688dc9c75e3ca16496332f03a3373bd64e17e68dbd91\" pid:4352 exited_at:{seconds:1747245854 nanos:778361123}" May 14 18:04:14.781300 containerd[1592]: time="2025-05-14T18:04:14.781270546Z" level=info 
msg="StopContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" with timeout 2 (s)" May 14 18:04:14.781615 containerd[1592]: time="2025-05-14T18:04:14.781592199Z" level=info msg="Stop container \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" with signal terminated" May 14 18:04:14.789189 systemd-networkd[1492]: lxc_health: Link DOWN May 14 18:04:14.789203 systemd-networkd[1492]: lxc_health: Lost carrier May 14 18:04:14.798053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030-rootfs.mount: Deactivated successfully. May 14 18:04:14.807960 systemd[1]: cri-containerd-4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0.scope: Deactivated successfully. May 14 18:04:14.808409 systemd[1]: cri-containerd-4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0.scope: Consumed 6.842s CPU time, 125.6M memory peak, 272K read from disk, 13.3M written to disk. 
May 14 18:04:14.809955 containerd[1592]: time="2025-05-14T18:04:14.809871195Z" level=info msg="received exit event container_id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" pid:3375 exited_at:{seconds:1747245854 nanos:809604747}" May 14 18:04:14.810120 containerd[1592]: time="2025-05-14T18:04:14.809975774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" id:\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" pid:3375 exited_at:{seconds:1747245854 nanos:809604747}" May 14 18:04:14.815873 containerd[1592]: time="2025-05-14T18:04:14.815832702Z" level=info msg="StopContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" returns successfully" May 14 18:04:14.818064 containerd[1592]: time="2025-05-14T18:04:14.818025158Z" level=info msg="StopPodSandbox for \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\"" May 14 18:04:14.818187 containerd[1592]: time="2025-05-14T18:04:14.818099259Z" level=info msg="Container to stop \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.826248 systemd[1]: cri-containerd-810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf.scope: Deactivated successfully. 
May 14 18:04:14.828318 containerd[1592]: time="2025-05-14T18:04:14.828263294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" id:\"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" pid:2935 exit_status:137 exited_at:{seconds:1747245854 nanos:827771607}" May 14 18:04:14.835008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0-rootfs.mount: Deactivated successfully. May 14 18:04:14.849272 containerd[1592]: time="2025-05-14T18:04:14.849218236Z" level=info msg="StopContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" returns successfully" May 14 18:04:14.850036 containerd[1592]: time="2025-05-14T18:04:14.849988474Z" level=info msg="StopPodSandbox for \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\"" May 14 18:04:14.850201 containerd[1592]: time="2025-05-14T18:04:14.850065360Z" level=info msg="Container to stop \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.850201 containerd[1592]: time="2025-05-14T18:04:14.850077604Z" level=info msg="Container to stop \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.850201 containerd[1592]: time="2025-05-14T18:04:14.850087322Z" level=info msg="Container to stop \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.850201 containerd[1592]: time="2025-05-14T18:04:14.850095487Z" level=info msg="Container to stop \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.850201 containerd[1592]: 
time="2025-05-14T18:04:14.850104135Z" level=info msg="Container to stop \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:14.857498 systemd[1]: cri-containerd-fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e.scope: Deactivated successfully. May 14 18:04:14.866996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf-rootfs.mount: Deactivated successfully. May 14 18:04:14.881014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e-rootfs.mount: Deactivated successfully. May 14 18:04:14.913949 containerd[1592]: time="2025-05-14T18:04:14.913828779Z" level=info msg="shim disconnected" id=fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e namespace=k8s.io May 14 18:04:14.913949 containerd[1592]: time="2025-05-14T18:04:14.913871190Z" level=warning msg="cleaning up after shim disconnected" id=fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e namespace=k8s.io May 14 18:04:14.921431 containerd[1592]: time="2025-05-14T18:04:14.913879736Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:04:14.921609 containerd[1592]: time="2025-05-14T18:04:14.914709878Z" level=info msg="shim disconnected" id=810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf namespace=k8s.io May 14 18:04:14.921609 containerd[1592]: time="2025-05-14T18:04:14.921491308Z" level=warning msg="cleaning up after shim disconnected" id=810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf namespace=k8s.io May 14 18:04:14.921609 containerd[1592]: time="2025-05-14T18:04:14.921501216Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:04:14.946858 containerd[1592]: time="2025-05-14T18:04:14.946707459Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" id:\"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" pid:2864 exit_status:137 exited_at:{seconds:1747245854 nanos:859245961}" May 14 18:04:14.949404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf-shm.mount: Deactivated successfully. May 14 18:04:14.949592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e-shm.mount: Deactivated successfully. May 14 18:04:14.966396 containerd[1592]: time="2025-05-14T18:04:14.966330012Z" level=info msg="TearDown network for sandbox \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" successfully" May 14 18:04:14.966396 containerd[1592]: time="2025-05-14T18:04:14.966384246Z" level=info msg="StopPodSandbox for \"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" returns successfully" May 14 18:04:14.967712 containerd[1592]: time="2025-05-14T18:04:14.967644176Z" level=info msg="TearDown network for sandbox \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" successfully" May 14 18:04:14.967712 containerd[1592]: time="2025-05-14T18:04:14.967691547Z" level=info msg="StopPodSandbox for \"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" returns successfully" May 14 18:04:14.971145 containerd[1592]: time="2025-05-14T18:04:14.971083689Z" level=info msg="received exit event sandbox_id:\"fec002e06e9017164c5701d99c56babbb44fa516c521f536fbe131736fe7486e\" exit_status:137 exited_at:{seconds:1747245854 nanos:859245961}" May 14 18:04:14.971517 containerd[1592]: time="2025-05-14T18:04:14.971470906Z" level=info msg="received exit event sandbox_id:\"810bd639f5bc43b23453339cbc096d200d429088b2339259f6f2e19a95df08cf\" exit_status:137 exited_at:{seconds:1747245854 nanos:827771607}" May 14 18:04:15.151089 kubelet[2723]: I0514 
18:04:15.150934 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-etc-cni-netd\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151089 kubelet[2723]: I0514 18:04:15.150982 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-cgroup\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151089 kubelet[2723]: I0514 18:04:15.151004 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-hostproc\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151089 kubelet[2723]: I0514 18:04:15.151030 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xl4g\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-kube-api-access-4xl4g\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151089 kubelet[2723]: I0514 18:04:15.151045 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.151089 kubelet[2723]: I0514 18:04:15.151057 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2f2220-829f-4115-b377-883fb2088506-cilium-config-path\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151129 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cni-path\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151113 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151156 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w94b\" (UniqueName: \"kubernetes.io/projected/61503865-77a4-44b7-94d8-f98e7eab9355-kube-api-access-4w94b\") pod \"61503865-77a4-44b7-94d8-f98e7eab9355\" (UID: \"61503865-77a4-44b7-94d8-f98e7eab9355\") " May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151179 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-bpf-maps\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151180 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-hostproc" (OuterVolumeSpecName: "hostproc") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.151794 kubelet[2723]: I0514 18:04:15.151200 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-hubble-tls\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151202 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cni-path" (OuterVolumeSpecName: "cni-path") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151257 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-kernel\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151285 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb2f2220-829f-4115-b377-883fb2088506-clustermesh-secrets\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151312 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-net\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151663 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61503865-77a4-44b7-94d8-f98e7eab9355-cilium-config-path\") pod \"61503865-77a4-44b7-94d8-f98e7eab9355\" (UID: \"61503865-77a4-44b7-94d8-f98e7eab9355\") " May 14 18:04:15.151991 kubelet[2723]: I0514 18:04:15.151689 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-xtables-lock\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151708 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-run\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151725 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-lib-modules\") pod \"bb2f2220-829f-4115-b377-883fb2088506\" (UID: \"bb2f2220-829f-4115-b377-883fb2088506\") " May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151765 2723 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151777 2723 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151791 2723 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151801 2723 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.152983 kubelet[2723]: I0514 18:04:15.151826 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.153232 kubelet[2723]: I0514 18:04:15.151850 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.153677 kubelet[2723]: I0514 18:04:15.153596 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.153677 kubelet[2723]: I0514 18:04:15.153637 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.153677 kubelet[2723]: I0514 18:04:15.153659 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.156693 kubelet[2723]: I0514 18:04:15.156655 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:04:15.157691 kubelet[2723]: I0514 18:04:15.157605 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb2f2220-829f-4115-b377-883fb2088506-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:04:15.158556 kubelet[2723]: I0514 18:04:15.157766 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61503865-77a4-44b7-94d8-f98e7eab9355-kube-api-access-4w94b" (OuterVolumeSpecName: "kube-api-access-4w94b") pod "61503865-77a4-44b7-94d8-f98e7eab9355" (UID: "61503865-77a4-44b7-94d8-f98e7eab9355"). InnerVolumeSpecName "kube-api-access-4w94b". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:04:15.158946 kubelet[2723]: I0514 18:04:15.158912 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-kube-api-access-4xl4g" (OuterVolumeSpecName: "kube-api-access-4xl4g") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "kube-api-access-4xl4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:04:15.161209 kubelet[2723]: I0514 18:04:15.160281 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:04:15.161209 kubelet[2723]: I0514 18:04:15.160372 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61503865-77a4-44b7-94d8-f98e7eab9355-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61503865-77a4-44b7-94d8-f98e7eab9355" (UID: "61503865-77a4-44b7-94d8-f98e7eab9355"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:04:15.161209 kubelet[2723]: I0514 18:04:15.160466 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2f2220-829f-4115-b377-883fb2088506-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bb2f2220-829f-4115-b377-883fb2088506" (UID: "bb2f2220-829f-4115-b377-883fb2088506"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252660 2723 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4w94b\" (UniqueName: \"kubernetes.io/projected/61503865-77a4-44b7-94d8-f98e7eab9355-kube-api-access-4w94b\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252694 2723 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252703 2723 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252713 2723 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252721 2723 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb2f2220-829f-4115-b377-883fb2088506-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252728 2723 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 kubelet[2723]: I0514 18:04:15.252735 2723 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61503865-77a4-44b7-94d8-f98e7eab9355-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.252718 
kubelet[2723]: I0514 18:04:15.252743 2723 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.253100 kubelet[2723]: I0514 18:04:15.252750 2723 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.253100 kubelet[2723]: I0514 18:04:15.252757 2723 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb2f2220-829f-4115-b377-883fb2088506-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.253100 kubelet[2723]: I0514 18:04:15.252768 2723 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4xl4g\" (UniqueName: \"kubernetes.io/projected/bb2f2220-829f-4115-b377-883fb2088506-kube-api-access-4xl4g\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.253100 kubelet[2723]: I0514 18:04:15.252775 2723 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2f2220-829f-4115-b377-883fb2088506-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:15.368838 kubelet[2723]: I0514 18:04:15.368803 2723 scope.go:117] "RemoveContainer" containerID="fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030" May 14 18:04:15.372606 containerd[1592]: time="2025-05-14T18:04:15.372216752Z" level=info msg="RemoveContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\"" May 14 18:04:15.375673 systemd[1]: Removed slice kubepods-besteffort-pod61503865_77a4_44b7_94d8_f98e7eab9355.slice - libcontainer container kubepods-besteffort-pod61503865_77a4_44b7_94d8_f98e7eab9355.slice. 
May 14 18:04:15.378918 containerd[1592]: time="2025-05-14T18:04:15.378799298Z" level=info msg="RemoveContainer for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" returns successfully" May 14 18:04:15.379178 kubelet[2723]: I0514 18:04:15.379097 2723 scope.go:117] "RemoveContainer" containerID="fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030" May 14 18:04:15.379342 containerd[1592]: time="2025-05-14T18:04:15.379305191Z" level=error msg="ContainerStatus for \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\": not found" May 14 18:04:15.382488 systemd[1]: Removed slice kubepods-burstable-podbb2f2220_829f_4115_b377_883fb2088506.slice - libcontainer container kubepods-burstable-podbb2f2220_829f_4115_b377_883fb2088506.slice. May 14 18:04:15.382706 systemd[1]: kubepods-burstable-podbb2f2220_829f_4115_b377_883fb2088506.slice: Consumed 6.963s CPU time, 125.9M memory peak, 276K read from disk, 13.3M written to disk. 
May 14 18:04:15.383922 kubelet[2723]: E0514 18:04:15.383893 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\": not found" containerID="fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030" May 14 18:04:15.384003 kubelet[2723]: I0514 18:04:15.383938 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030"} err="failed to get container status \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc8eae17e16ecee344bed6d56f99161ce0e3f9c7d051dbe383643cad9803c030\": not found" May 14 18:04:15.384056 kubelet[2723]: I0514 18:04:15.384005 2723 scope.go:117] "RemoveContainer" containerID="4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0" May 14 18:04:15.385452 containerd[1592]: time="2025-05-14T18:04:15.385425227Z" level=info msg="RemoveContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\"" May 14 18:04:15.391444 containerd[1592]: time="2025-05-14T18:04:15.391414984Z" level=info msg="RemoveContainer for \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" returns successfully" May 14 18:04:15.391645 kubelet[2723]: I0514 18:04:15.391616 2723 scope.go:117] "RemoveContainer" containerID="5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478" May 14 18:04:15.393555 containerd[1592]: time="2025-05-14T18:04:15.393360468Z" level=info msg="RemoveContainer for \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\"" May 14 18:04:15.401565 containerd[1592]: time="2025-05-14T18:04:15.401451748Z" level=info msg="RemoveContainer for \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" returns successfully" 
May 14 18:04:15.401771 kubelet[2723]: I0514 18:04:15.401681 2723 scope.go:117] "RemoveContainer" containerID="33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41" May 14 18:04:15.403906 containerd[1592]: time="2025-05-14T18:04:15.403871105Z" level=info msg="RemoveContainer for \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\"" May 14 18:04:15.408487 containerd[1592]: time="2025-05-14T18:04:15.408455927Z" level=info msg="RemoveContainer for \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" returns successfully" May 14 18:04:15.408616 kubelet[2723]: I0514 18:04:15.408602 2723 scope.go:117] "RemoveContainer" containerID="d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544" May 14 18:04:15.410216 containerd[1592]: time="2025-05-14T18:04:15.409788354Z" level=info msg="RemoveContainer for \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\"" May 14 18:04:15.413741 containerd[1592]: time="2025-05-14T18:04:15.413722586Z" level=info msg="RemoveContainer for \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" returns successfully" May 14 18:04:15.413934 kubelet[2723]: I0514 18:04:15.413853 2723 scope.go:117] "RemoveContainer" containerID="9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41" May 14 18:04:15.415792 containerd[1592]: time="2025-05-14T18:04:15.415198588Z" level=info msg="RemoveContainer for \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\"" May 14 18:04:15.419570 containerd[1592]: time="2025-05-14T18:04:15.419536580Z" level=info msg="RemoveContainer for \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" returns successfully" May 14 18:04:15.419717 kubelet[2723]: I0514 18:04:15.419697 2723 scope.go:117] "RemoveContainer" containerID="4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0" May 14 18:04:15.419947 containerd[1592]: time="2025-05-14T18:04:15.419910992Z" level=error msg="ContainerStatus for 
\"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\": not found" May 14 18:04:15.420073 kubelet[2723]: E0514 18:04:15.420045 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\": not found" containerID="4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0" May 14 18:04:15.420127 kubelet[2723]: I0514 18:04:15.420087 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0"} err="failed to get container status \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c05cedeb38538a503acfd3ff0b90bf44f71210f54d5686da49b9f8df74ae1e0\": not found" May 14 18:04:15.420127 kubelet[2723]: I0514 18:04:15.420121 2723 scope.go:117] "RemoveContainer" containerID="5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478" May 14 18:04:15.420316 containerd[1592]: time="2025-05-14T18:04:15.420281427Z" level=error msg="ContainerStatus for \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\": not found" May 14 18:04:15.420407 kubelet[2723]: E0514 18:04:15.420378 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\": not found" 
containerID="5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478" May 14 18:04:15.420407 kubelet[2723]: I0514 18:04:15.420397 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478"} err="failed to get container status \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\": rpc error: code = NotFound desc = an error occurred when try to find container \"5af5732ca209db5720ef2b73cf0704b1ecf8049af102abd96cabe07230e6e478\": not found" May 14 18:04:15.420500 kubelet[2723]: I0514 18:04:15.420412 2723 scope.go:117] "RemoveContainer" containerID="33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41" May 14 18:04:15.420592 containerd[1592]: time="2025-05-14T18:04:15.420553536Z" level=error msg="ContainerStatus for \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\": not found" May 14 18:04:15.420697 kubelet[2723]: E0514 18:04:15.420676 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\": not found" containerID="33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41" May 14 18:04:15.420738 kubelet[2723]: I0514 18:04:15.420698 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41"} err="failed to get container status \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\": rpc error: code = NotFound desc = an error occurred when try to find container \"33fb5c0e4e7ad3a91f8148a7de9323d67ee8a0f701bb2ddf86a08838f5fc0d41\": not found" May 14 
18:04:15.420738 kubelet[2723]: I0514 18:04:15.420713 2723 scope.go:117] "RemoveContainer" containerID="d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544" May 14 18:04:15.420965 containerd[1592]: time="2025-05-14T18:04:15.420878494Z" level=error msg="ContainerStatus for \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\": not found" May 14 18:04:15.421099 kubelet[2723]: E0514 18:04:15.421066 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\": not found" containerID="d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544" May 14 18:04:15.421147 kubelet[2723]: I0514 18:04:15.421104 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544"} err="failed to get container status \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1b1be282b801228b6a7105079306332661a48ec0c0bdf9c28668da3582e2544\": not found" May 14 18:04:15.421147 kubelet[2723]: I0514 18:04:15.421125 2723 scope.go:117] "RemoveContainer" containerID="9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41" May 14 18:04:15.421310 containerd[1592]: time="2025-05-14T18:04:15.421258799Z" level=error msg="ContainerStatus for \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\": not found" May 14 18:04:15.421374 kubelet[2723]: E0514 18:04:15.421353 2723 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\": not found" containerID="9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41" May 14 18:04:15.421456 kubelet[2723]: I0514 18:04:15.421435 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41"} err="failed to get container status \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a279a51505b57bcf56b4dc46fe0b47a58f38550c74d7459eac0e0ff25e80c41\": not found" May 14 18:04:15.797381 systemd[1]: var-lib-kubelet-pods-61503865\x2d77a4\x2d44b7\x2d94d8\x2df98e7eab9355-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4w94b.mount: Deactivated successfully. May 14 18:04:15.797486 systemd[1]: var-lib-kubelet-pods-bb2f2220\x2d829f\x2d4115\x2db377\x2d883fb2088506-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4xl4g.mount: Deactivated successfully. May 14 18:04:15.797572 systemd[1]: var-lib-kubelet-pods-bb2f2220\x2d829f\x2d4115\x2db377\x2d883fb2088506-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 18:04:15.797655 systemd[1]: var-lib-kubelet-pods-bb2f2220\x2d829f\x2d4115\x2db377\x2d883fb2088506-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 18:04:16.707299 sshd[4331]: Connection closed by 10.0.0.1 port 57872 May 14 18:04:16.707775 sshd-session[4329]: pam_unix(sshd:session): session closed for user core May 14 18:04:16.721854 systemd[1]: sshd@25-10.0.0.42:22-10.0.0.1:57872.service: Deactivated successfully. May 14 18:04:16.723891 systemd[1]: session-26.scope: Deactivated successfully. 
May 14 18:04:16.724676 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit. May 14 18:04:16.728345 systemd[1]: Started sshd@26-10.0.0.42:22-10.0.0.1:37374.service - OpenSSH per-connection server daemon (10.0.0.1:37374). May 14 18:04:16.729217 systemd-logind[1568]: Removed session 26. May 14 18:04:16.784905 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 37374 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:16.786453 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:16.791163 systemd-logind[1568]: New session 27 of user core. May 14 18:04:16.801680 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 18:04:17.023260 kubelet[2723]: E0514 18:04:17.023201 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:17.025894 kubelet[2723]: I0514 18:04:17.025869 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61503865-77a4-44b7-94d8-f98e7eab9355" path="/var/lib/kubelet/pods/61503865-77a4-44b7-94d8-f98e7eab9355/volumes" May 14 18:04:17.026542 kubelet[2723]: I0514 18:04:17.026500 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2f2220-829f-4115-b377-883fb2088506" path="/var/lib/kubelet/pods/bb2f2220-829f-4115-b377-883fb2088506/volumes" May 14 18:04:17.510078 sshd[4488]: Connection closed by 10.0.0.1 port 37374 May 14 18:04:17.510401 sshd-session[4486]: pam_unix(sshd:session): session closed for user core May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.524838 2723 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="mount-cgroup" May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.525367 2723 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="mount-bpf-fs" May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.525376 2723 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="clean-cilium-state" May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.525384 2723 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61503865-77a4-44b7-94d8-f98e7eab9355" containerName="cilium-operator" May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.525392 2723 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="apply-sysctl-overwrites" May 14 18:04:17.525944 kubelet[2723]: E0514 18:04:17.525397 2723 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="cilium-agent" May 14 18:04:17.525944 kubelet[2723]: I0514 18:04:17.525423 2723 memory_manager.go:354] "RemoveStaleState removing state" podUID="61503865-77a4-44b7-94d8-f98e7eab9355" containerName="cilium-operator" May 14 18:04:17.525944 kubelet[2723]: I0514 18:04:17.525433 2723 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2f2220-829f-4115-b377-883fb2088506" containerName="cilium-agent" May 14 18:04:17.528671 systemd[1]: sshd@26-10.0.0.42:22-10.0.0.1:37374.service: Deactivated successfully. May 14 18:04:17.532900 systemd[1]: session-27.scope: Deactivated successfully. May 14 18:04:17.534478 systemd-logind[1568]: Session 27 logged out. Waiting for processes to exit. May 14 18:04:17.538764 systemd-logind[1568]: Removed session 27. May 14 18:04:17.541201 systemd[1]: Started sshd@27-10.0.0.42:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). May 14 18:04:17.553494 systemd[1]: Created slice kubepods-burstable-pod74a86598_ce10_43aa_b048_205f8d8cef21.slice - libcontainer container kubepods-burstable-pod74a86598_ce10_43aa_b048_205f8d8cef21.slice. 
May 14 18:04:17.591430 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:17.593129 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:17.598224 systemd-logind[1568]: New session 28 of user core. May 14 18:04:17.607738 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 18:04:17.660995 sshd[4502]: Connection closed by 10.0.0.1 port 37390 May 14 18:04:17.661428 sshd-session[4500]: pam_unix(sshd:session): session closed for user core May 14 18:04:17.664790 kubelet[2723]: I0514 18:04:17.664757 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-cilium-cgroup\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664852 kubelet[2723]: I0514 18:04:17.664797 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74a86598-ce10-43aa-b048-205f8d8cef21-cilium-config-path\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664852 kubelet[2723]: I0514 18:04:17.664822 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8l7\" (UniqueName: \"kubernetes.io/projected/74a86598-ce10-43aa-b048-205f8d8cef21-kube-api-access-dx8l7\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664852 kubelet[2723]: I0514 18:04:17.664838 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/74a86598-ce10-43aa-b048-205f8d8cef21-clustermesh-secrets\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664938 kubelet[2723]: I0514 18:04:17.664914 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-lib-modules\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664938 kubelet[2723]: I0514 18:04:17.664930 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74a86598-ce10-43aa-b048-205f8d8cef21-cilium-ipsec-secrets\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664984 kubelet[2723]: I0514 18:04:17.664944 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-host-proc-sys-net\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.664984 kubelet[2723]: I0514 18:04:17.664961 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-host-proc-sys-kernel\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665037 kubelet[2723]: I0514 18:04:17.664975 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74a86598-ce10-43aa-b048-205f8d8cef21-hubble-tls\") pod \"cilium-25l47\" (UID: 
\"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665037 kubelet[2723]: I0514 18:04:17.664999 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-xtables-lock\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665037 kubelet[2723]: I0514 18:04:17.665013 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-hostproc\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665037 kubelet[2723]: I0514 18:04:17.665027 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-cilium-run\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665125 kubelet[2723]: I0514 18:04:17.665041 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-bpf-maps\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665172 kubelet[2723]: I0514 18:04:17.665131 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-cni-path\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.665211 kubelet[2723]: I0514 18:04:17.665192 2723 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74a86598-ce10-43aa-b048-205f8d8cef21-etc-cni-netd\") pod \"cilium-25l47\" (UID: \"74a86598-ce10-43aa-b048-205f8d8cef21\") " pod="kube-system/cilium-25l47" May 14 18:04:17.674376 systemd[1]: sshd@27-10.0.0.42:22-10.0.0.1:37390.service: Deactivated successfully. May 14 18:04:17.676164 systemd[1]: session-28.scope: Deactivated successfully. May 14 18:04:17.677136 systemd-logind[1568]: Session 28 logged out. Waiting for processes to exit. May 14 18:04:17.680483 systemd[1]: Started sshd@28-10.0.0.42:22-10.0.0.1:37396.service - OpenSSH per-connection server daemon (10.0.0.1:37396). May 14 18:04:17.681257 systemd-logind[1568]: Removed session 28. May 14 18:04:17.727285 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 37396 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:17.729455 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:17.734829 systemd-logind[1568]: New session 29 of user core. May 14 18:04:17.741678 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 14 18:04:17.862738 kubelet[2723]: E0514 18:04:17.862612 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:17.864123 containerd[1592]: time="2025-05-14T18:04:17.864063199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25l47,Uid:74a86598-ce10-43aa-b048-205f8d8cef21,Namespace:kube-system,Attempt:0,}" May 14 18:04:17.882740 containerd[1592]: time="2025-05-14T18:04:17.882620244Z" level=info msg="connecting to shim a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" namespace=k8s.io protocol=ttrpc version=3 May 14 18:04:17.910686 systemd[1]: Started cri-containerd-a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4.scope - libcontainer container a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4. 
May 14 18:04:17.936021 containerd[1592]: time="2025-05-14T18:04:17.935970302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25l47,Uid:74a86598-ce10-43aa-b048-205f8d8cef21,Namespace:kube-system,Attempt:0,} returns sandbox id \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\""
May 14 18:04:17.936863 kubelet[2723]: E0514 18:04:17.936829 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:17.939068 containerd[1592]: time="2025-05-14T18:04:17.939033762Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:04:17.947109 containerd[1592]: time="2025-05-14T18:04:17.947058694Z" level=info msg="Container bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:17.955339 containerd[1592]: time="2025-05-14T18:04:17.955293835Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\""
May 14 18:04:17.956939 containerd[1592]: time="2025-05-14T18:04:17.955813975Z" level=info msg="StartContainer for \"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\""
May 14 18:04:17.956939 containerd[1592]: time="2025-05-14T18:04:17.956680715Z" level=info msg="connecting to shim bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" protocol=ttrpc version=3
May 14 18:04:17.984773 systemd[1]: Started cri-containerd-bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284.scope - libcontainer container bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284.
May 14 18:04:18.016487 containerd[1592]: time="2025-05-14T18:04:18.016413791Z" level=info msg="StartContainer for \"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\" returns successfully"
May 14 18:04:18.027024 systemd[1]: cri-containerd-bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284.scope: Deactivated successfully.
May 14 18:04:18.028127 containerd[1592]: time="2025-05-14T18:04:18.028075736Z" level=info msg="received exit event container_id:\"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\" id:\"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\" pid:4580 exited_at:{seconds:1747245858 nanos:27713006}"
May 14 18:04:18.028288 containerd[1592]: time="2025-05-14T18:04:18.028104671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\" id:\"bbdbd90664de3ae69ed061b109f3b46fe062f9412470e06adc9e00c203ced284\" pid:4580 exited_at:{seconds:1747245858 nanos:27713006}"
May 14 18:04:18.107089 kubelet[2723]: E0514 18:04:18.107045 2723 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:04:18.394201 kubelet[2723]: E0514 18:04:18.394163 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:18.396008 containerd[1592]: time="2025-05-14T18:04:18.395960559Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:04:18.404197 containerd[1592]: time="2025-05-14T18:04:18.404146292Z" level=info msg="Container a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:18.411909 containerd[1592]: time="2025-05-14T18:04:18.411861690Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\""
May 14 18:04:18.413341 containerd[1592]: time="2025-05-14T18:04:18.412380287Z" level=info msg="StartContainer for \"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\""
May 14 18:04:18.413494 containerd[1592]: time="2025-05-14T18:04:18.413444160Z" level=info msg="connecting to shim a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" protocol=ttrpc version=3
May 14 18:04:18.433661 systemd[1]: Started cri-containerd-a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f.scope - libcontainer container a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f.
May 14 18:04:18.466277 containerd[1592]: time="2025-05-14T18:04:18.466220913Z" level=info msg="StartContainer for \"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\" returns successfully"
May 14 18:04:18.473437 systemd[1]: cri-containerd-a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f.scope: Deactivated successfully.
May 14 18:04:18.474150 containerd[1592]: time="2025-05-14T18:04:18.474097047Z" level=info msg="received exit event container_id:\"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\" id:\"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\" pid:4626 exited_at:{seconds:1747245858 nanos:473768181}"
May 14 18:04:18.474749 containerd[1592]: time="2025-05-14T18:04:18.474548956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\" id:\"a505c3eb223b33a38737613f40d48f9a1c84527deb8b7cde4b9a6a418eddb48f\" pid:4626 exited_at:{seconds:1747245858 nanos:473768181}"
May 14 18:04:19.397863 kubelet[2723]: E0514 18:04:19.397830 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:19.400183 containerd[1592]: time="2025-05-14T18:04:19.400131539Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:04:19.411045 containerd[1592]: time="2025-05-14T18:04:19.410938762Z" level=info msg="Container 9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:19.421747 containerd[1592]: time="2025-05-14T18:04:19.421711028Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\""
May 14 18:04:19.422322 containerd[1592]: time="2025-05-14T18:04:19.422193475Z" level=info msg="StartContainer for \"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\""
May 14 18:04:19.423854 containerd[1592]: time="2025-05-14T18:04:19.423829176Z" level=info msg="connecting to shim 9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" protocol=ttrpc version=3
May 14 18:04:19.446744 systemd[1]: Started cri-containerd-9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13.scope - libcontainer container 9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13.
May 14 18:04:19.490673 systemd[1]: cri-containerd-9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13.scope: Deactivated successfully.
May 14 18:04:19.491999 containerd[1592]: time="2025-05-14T18:04:19.491901504Z" level=info msg="StartContainer for \"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\" returns successfully"
May 14 18:04:19.492721 containerd[1592]: time="2025-05-14T18:04:19.492666810Z" level=info msg="received exit event container_id:\"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\" id:\"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\" pid:4669 exited_at:{seconds:1747245859 nanos:492247312}"
May 14 18:04:19.494031 containerd[1592]: time="2025-05-14T18:04:19.493990326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\" id:\"9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13\" pid:4669 exited_at:{seconds:1747245859 nanos:492247312}"
May 14 18:04:19.515496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd9ccf2671030ef3465fd5163999fe5a30efa9bdec6186632a3a8704d419c13-rootfs.mount: Deactivated successfully.
May 14 18:04:20.403607 kubelet[2723]: E0514 18:04:20.403569 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:20.405638 containerd[1592]: time="2025-05-14T18:04:20.405164936Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:04:20.420329 containerd[1592]: time="2025-05-14T18:04:20.420278200Z" level=info msg="Container 30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:20.429496 containerd[1592]: time="2025-05-14T18:04:20.429450918Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\""
May 14 18:04:20.430062 containerd[1592]: time="2025-05-14T18:04:20.430034046Z" level=info msg="StartContainer for \"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\""
May 14 18:04:20.430894 containerd[1592]: time="2025-05-14T18:04:20.430864586Z" level=info msg="connecting to shim 30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" protocol=ttrpc version=3
May 14 18:04:20.461743 systemd[1]: Started cri-containerd-30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a.scope - libcontainer container 30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a.
May 14 18:04:20.492864 systemd[1]: cri-containerd-30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a.scope: Deactivated successfully.
May 14 18:04:20.493371 containerd[1592]: time="2025-05-14T18:04:20.493326124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\" id:\"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\" pid:4710 exited_at:{seconds:1747245860 nanos:493035251}"
May 14 18:04:20.494067 containerd[1592]: time="2025-05-14T18:04:20.494040943Z" level=info msg="received exit event container_id:\"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\" id:\"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\" pid:4710 exited_at:{seconds:1747245860 nanos:493035251}"
May 14 18:04:20.502219 containerd[1592]: time="2025-05-14T18:04:20.502176680Z" level=info msg="StartContainer for \"30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a\" returns successfully"
May 14 18:04:20.514690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30161988cd21d7d314d7cd81e9ee642934eb0034ab69e42d78132f73402ef25a-rootfs.mount: Deactivated successfully.
May 14 18:04:21.410987 kubelet[2723]: E0514 18:04:21.410954 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:21.413839 containerd[1592]: time="2025-05-14T18:04:21.413785556Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:04:21.507660 containerd[1592]: time="2025-05-14T18:04:21.507607851Z" level=info msg="Container 6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:21.530669 containerd[1592]: time="2025-05-14T18:04:21.530610087Z" level=info msg="CreateContainer within sandbox \"a56b12a8eb364301213fc89b6d2d17b2980f51675fa5808fb19613775a2ab8d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\""
May 14 18:04:21.532151 containerd[1592]: time="2025-05-14T18:04:21.531247218Z" level=info msg="StartContainer for \"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\""
May 14 18:04:21.533175 containerd[1592]: time="2025-05-14T18:04:21.533138102Z" level=info msg="connecting to shim 6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea" address="unix:///run/containerd/s/1155962dad09e5943979914507be0cd9016a5f2cee542847142acbd79c1cdcd3" protocol=ttrpc version=3
May 14 18:04:21.558664 systemd[1]: Started cri-containerd-6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea.scope - libcontainer container 6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea.
May 14 18:04:21.597966 containerd[1592]: time="2025-05-14T18:04:21.597919542Z" level=info msg="StartContainer for \"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" returns successfully"
May 14 18:04:21.668736 containerd[1592]: time="2025-05-14T18:04:21.668632466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"549db243518fda98a2f95e8a0f4fb041270b57545477ff15824d34059eed2617\" pid:4777 exited_at:{seconds:1747245861 nanos:668351882}"
May 14 18:04:22.042563 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 14 18:04:22.416669 kubelet[2723]: E0514 18:04:22.416517 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:22.452774 kubelet[2723]: I0514 18:04:22.452715 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-25l47" podStartSLOduration=5.452695536 podStartE2EDuration="5.452695536s" podCreationTimestamp="2025-05-14 18:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:22.451422527 +0000 UTC m=+89.538564019" watchObservedRunningTime="2025-05-14 18:04:22.452695536 +0000 UTC m=+89.539837028"
May 14 18:04:23.863598 kubelet[2723]: E0514 18:04:23.863516 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:24.103033 containerd[1592]: time="2025-05-14T18:04:24.102963539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"1315ebe74b904bd9e63c8abfb785e4e5c2b7409da9896c2ad36124b245df63d5\" pid:4954 exit_status:1 exited_at:{seconds:1747245864 nanos:102318244}"
May 14 18:04:25.237858 systemd-networkd[1492]: lxc_health: Link UP
May 14 18:04:25.238262 systemd-networkd[1492]: lxc_health: Gained carrier
May 14 18:04:25.865552 kubelet[2723]: E0514 18:04:25.864647 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:26.237055 containerd[1592]: time="2025-05-14T18:04:26.237008022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"776b14dad432fdb6a96d236387a865d59bef8c7a8cb7f93da63028841d059401\" pid:5309 exited_at:{seconds:1747245866 nanos:234720611}"
May 14 18:04:26.424637 kubelet[2723]: E0514 18:04:26.424554 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:26.913755 systemd-networkd[1492]: lxc_health: Gained IPv6LL
May 14 18:04:27.427055 kubelet[2723]: E0514 18:04:27.426955 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:28.352142 containerd[1592]: time="2025-05-14T18:04:28.352066341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"554a1e77bade351108889acf5324d5f4d9d94a2afedba9505f0b88af183364f1\" pid:5344 exited_at:{seconds:1747245868 nanos:351771472}"
May 14 18:04:30.474644 containerd[1592]: time="2025-05-14T18:04:30.474584646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"f1c19a9a28dd0cb84435dc4bbaaa62e2b0c39ff983e39d4dc8f53b3405aa78e6\" pid:5377 exited_at:{seconds:1747245870 nanos:473870162}"
May 14 18:04:32.583879 containerd[1592]: time="2025-05-14T18:04:32.583819516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"701845e392ba58d7c7a80b8ee853ae1fd4bdf354f79b9013ef1d88cf789f3f23\" pid:5401 exited_at:{seconds:1747245872 nanos:583309038}"
May 14 18:04:34.673066 containerd[1592]: time="2025-05-14T18:04:34.673003007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d77fdd479fe99d58a99f4b25e523f1cfaa737504e5f1424b37a370f6716d8ea\" id:\"2c5699303e4576130bd20a8f99b4fa3400a72de23dc102a84308570321b9bfbe\" pid:5425 exited_at:{seconds:1747245874 nanos:672514261}"
May 14 18:04:34.682727 sshd[4511]: Connection closed by 10.0.0.1 port 37396
May 14 18:04:34.683586 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
May 14 18:04:34.688313 systemd[1]: sshd@28-10.0.0.42:22-10.0.0.1:37396.service: Deactivated successfully.
May 14 18:04:34.690703 systemd[1]: session-29.scope: Deactivated successfully.
May 14 18:04:34.691754 systemd-logind[1568]: Session 29 logged out. Waiting for processes to exit.
May 14 18:04:34.693249 systemd-logind[1568]: Removed session 29.