May 15 23:58:15.224528 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025
May 15 23:58:15.224566 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 15 23:58:15.224581 kernel: BIOS-provided physical RAM map:
May 15 23:58:15.224590 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 23:58:15.224598 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 23:58:15.224619 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 23:58:15.224629 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 23:58:15.224638 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 23:58:15.224665 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 15 23:58:15.224675 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 15 23:58:15.224688 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 15 23:58:15.224697 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 15 23:58:15.224706 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 15 23:58:15.224714 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 15 23:58:15.224725 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 15 23:58:15.224735 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 23:58:15.224748 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 15 23:58:15.224767 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 15 23:58:15.224777 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 15 23:58:15.224786 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 15 23:58:15.224796 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 15 23:58:15.224805 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 23:58:15.224815 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 15 23:58:15.224824 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 23:58:15.224833 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 15 23:58:15.224842 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 23:58:15.224852 kernel: NX (Execute Disable) protection: active
May 15 23:58:15.224865 kernel: APIC: Static calls initialized
May 15 23:58:15.224874 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 15 23:58:15.224884 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 15 23:58:15.224893 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 15 23:58:15.224902 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 15 23:58:15.224911 kernel: extended physical RAM map:
May 15 23:58:15.224919 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 23:58:15.224928 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 23:58:15.224938 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 23:58:15.224946 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 23:58:15.224956 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 23:58:15.224968 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 15 23:58:15.224978 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 15 23:58:15.224991 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 15 23:58:15.225001 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 15 23:58:15.225010 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 15 23:58:15.225020 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 15 23:58:15.225029 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 15 23:58:15.225042 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 15 23:58:15.225052 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 15 23:58:15.225061 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 15 23:58:15.225071 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 15 23:58:15.225080 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 23:58:15.225090 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 15 23:58:15.225101 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 15 23:58:15.225113 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 15 23:58:15.225125 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 15 23:58:15.225140 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 15 23:58:15.225152 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 23:58:15.225182 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 15 23:58:15.225194 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 23:58:15.225206 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 15 23:58:15.225218 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 23:58:15.225230 kernel: efi: EFI v2.7 by EDK II
May 15 23:58:15.225242 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 15 23:58:15.225251 kernel: random: crng init done
May 15 23:58:15.225261 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 15 23:58:15.225271 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 15 23:58:15.225283 kernel: secureboot: Secure boot disabled
May 15 23:58:15.225293 kernel: SMBIOS 2.8 present.
May 15 23:58:15.225302 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 15 23:58:15.225312 kernel: Hypervisor detected: KVM
May 15 23:58:15.225321 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 23:58:15.225331 kernel: kvm-clock: using sched offset of 3432569658 cycles
May 15 23:58:15.225341 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 23:58:15.225351 kernel: tsc: Detected 2794.748 MHz processor
May 15 23:58:15.225360 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 23:58:15.225371 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 23:58:15.225380 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 15 23:58:15.225393 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 15 23:58:15.225403 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 23:58:15.225412 kernel: Using GB pages for direct mapping
May 15 23:58:15.225422 kernel: ACPI: Early table checksum verification disabled
May 15 23:58:15.225431 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 15 23:58:15.225440 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 15 23:58:15.225450 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225460 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225469 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 15 23:58:15.225482 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225492 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225502 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225512 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:15.225522 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 15 23:58:15.225531 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 15 23:58:15.225541 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 15 23:58:15.225551 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 15 23:58:15.225564 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 15 23:58:15.225573 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 15 23:58:15.225583 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 15 23:58:15.225592 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 15 23:58:15.225602 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 15 23:58:15.225611 kernel: No NUMA configuration found
May 15 23:58:15.225620 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 15 23:58:15.225630 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 15 23:58:15.225640 kernel: Zone ranges:
May 15 23:58:15.225649 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 23:58:15.225662 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 15 23:58:15.225672 kernel: Normal empty
May 15 23:58:15.225679 kernel: Movable zone start for each node
May 15 23:58:15.225686 kernel: Early memory node ranges
May 15 23:58:15.225693 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 15 23:58:15.225700 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 15 23:58:15.225707 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 15 23:58:15.225714 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 15 23:58:15.225721 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 15 23:58:15.225731 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 15 23:58:15.225738 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 15 23:58:15.225745 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 15 23:58:15.225753 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 15 23:58:15.225770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 23:58:15.225777 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 15 23:58:15.225793 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 15 23:58:15.225802 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 23:58:15.225810 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 15 23:58:15.225817 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 15 23:58:15.225825 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 15 23:58:15.225832 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 15 23:58:15.225842 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 15 23:58:15.225850 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 23:58:15.225857 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 23:58:15.225865 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 23:58:15.225872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 23:58:15.225882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 23:58:15.225890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 23:58:15.225897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 23:58:15.225905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 23:58:15.225912 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 23:58:15.225920 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 23:58:15.225927 kernel: TSC deadline timer available
May 15 23:58:15.225935 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 23:58:15.225942 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 23:58:15.225952 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 23:58:15.225960 kernel: kvm-guest: setup PV sched yield
May 15 23:58:15.225967 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 15 23:58:15.225975 kernel: Booting paravirtualized kernel on KVM
May 15 23:58:15.225983 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 23:58:15.225990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 15 23:58:15.225998 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 15 23:58:15.226005 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 15 23:58:15.226013 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 23:58:15.226023 kernel: kvm-guest: PV spinlocks enabled
May 15 23:58:15.226030 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 23:58:15.226039 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 15 23:58:15.226047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:58:15.226055 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:58:15.226062 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:58:15.226070 kernel: Fallback order for Node 0: 0
May 15 23:58:15.226077 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 15 23:58:15.226087 kernel: Policy zone: DMA32
May 15 23:58:15.226095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:58:15.226102 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 175776K reserved, 0K cma-reserved)
May 15 23:58:15.226110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 23:58:15.226118 kernel: ftrace: allocating 37950 entries in 149 pages
May 15 23:58:15.226125 kernel: ftrace: allocated 149 pages with 4 groups
May 15 23:58:15.226132 kernel: Dynamic Preempt: voluntary
May 15 23:58:15.226140 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:58:15.226148 kernel: rcu: RCU event tracing is enabled.
May 15 23:58:15.226159 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 23:58:15.226187 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:58:15.226198 kernel: Rude variant of Tasks RCU enabled.
May 15 23:58:15.226208 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:58:15.226216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:58:15.226223 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 23:58:15.226231 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 23:58:15.226239 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:58:15.226246 kernel: Console: colour dummy device 80x25
May 15 23:58:15.226254 kernel: printk: console [ttyS0] enabled
May 15 23:58:15.226265 kernel: ACPI: Core revision 20230628
May 15 23:58:15.226273 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 23:58:15.226280 kernel: APIC: Switch to symmetric I/O mode setup
May 15 23:58:15.226288 kernel: x2apic enabled
May 15 23:58:15.226296 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 23:58:15.226304 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 23:58:15.226312 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 23:58:15.226319 kernel: kvm-guest: setup PV IPIs
May 15 23:58:15.226341 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 23:58:15.226355 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 23:58:15.226365 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 23:58:15.226373 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 23:58:15.226381 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 23:58:15.226388 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 23:58:15.226396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 23:58:15.226404 kernel: Spectre V2 : Mitigation: Retpolines
May 15 23:58:15.226412 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 23:58:15.226419 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 23:58:15.226430 kernel: RETBleed: Mitigation: untrained return thunk
May 15 23:58:15.226437 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 23:58:15.226445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 23:58:15.226453 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 23:58:15.226462 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 23:58:15.226469 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 23:58:15.226477 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 23:58:15.226485 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 23:58:15.226496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 23:58:15.226503 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 23:58:15.226511 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 15 23:58:15.226519 kernel: Freeing SMP alternatives memory: 32K
May 15 23:58:15.226527 kernel: pid_max: default: 32768 minimum: 301
May 15 23:58:15.226535 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:58:15.226542 kernel: landlock: Up and running.
May 15 23:58:15.226550 kernel: SELinux: Initializing.
May 15 23:58:15.226558 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:58:15.226568 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:58:15.226576 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 23:58:15.226584 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:15.226591 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:15.226599 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:15.226607 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 23:58:15.226615 kernel: ... version: 0
May 15 23:58:15.226623 kernel: ... bit width: 48
May 15 23:58:15.226630 kernel: ... generic registers: 6
May 15 23:58:15.226651 kernel: ... value mask: 0000ffffffffffff
May 15 23:58:15.226672 kernel: ... max period: 00007fffffffffff
May 15 23:58:15.226692 kernel: ... fixed-purpose events: 0
May 15 23:58:15.226712 kernel: ... event mask: 000000000000003f
May 15 23:58:15.226721 kernel: signal: max sigframe size: 1776
May 15 23:58:15.226729 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:58:15.226737 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:58:15.227967 kernel: smp: Bringing up secondary CPUs ...
May 15 23:58:15.227979 kernel: smpboot: x86: Booting SMP configuration:
May 15 23:58:15.227996 kernel: .... node #0, CPUs: #1 #2 #3
May 15 23:58:15.228006 kernel: smp: Brought up 1 node, 4 CPUs
May 15 23:58:15.228016 kernel: smpboot: Max logical packages: 1
May 15 23:58:15.228026 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 23:58:15.228037 kernel: devtmpfs: initialized
May 15 23:58:15.228047 kernel: x86/mm: Memory block size: 128MB
May 15 23:58:15.228057 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 15 23:58:15.228068 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 15 23:58:15.228079 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 15 23:58:15.228094 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 15 23:58:15.228104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 15 23:58:15.228115 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 15 23:58:15.228126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:58:15.228136 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 23:58:15.228146 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:58:15.228157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:58:15.228182 kernel: audit: initializing netlink subsys (disabled)
May 15 23:58:15.228202 kernel: audit: type=2000 audit(1747353493.831:1): state=initialized audit_enabled=0 res=1
May 15 23:58:15.228217 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:58:15.228228 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 23:58:15.228238 kernel: cpuidle: using governor menu
May 15 23:58:15.228248 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:58:15.228259 kernel: dca service started, version 1.12.1
May 15 23:58:15.228270 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 15 23:58:15.228280 kernel: PCI: Using configuration type 1 for base access
May 15 23:58:15.228291 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 23:58:15.228302 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:58:15.228317 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:58:15.228327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:58:15.228338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:58:15.228349 kernel: ACPI: Added _OSI(Module Device)
May 15 23:58:15.228360 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:58:15.228370 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:58:15.228380 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:58:15.228391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:58:15.228402 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 23:58:15.228415 kernel: ACPI: Interpreter enabled
May 15 23:58:15.228426 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 23:58:15.228436 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 23:58:15.228447 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 23:58:15.228457 kernel: PCI: Using E820 reservations for host bridge windows
May 15 23:58:15.228468 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 23:58:15.228479 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:58:15.228720 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:58:15.228907 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 23:58:15.229080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 23:58:15.229097 kernel: PCI host bridge to bus 0000:00
May 15 23:58:15.229280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 23:58:15.229426 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 23:58:15.229570 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 23:58:15.229713 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 15 23:58:15.229909 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 15 23:58:15.240032 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 15 23:58:15.240233 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:58:15.240452 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 23:58:15.240672 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 23:58:15.240846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 15 23:58:15.241010 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 15 23:58:15.241205 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 15 23:58:15.241384 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 15 23:58:15.241543 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 23:58:15.241715 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 23:58:15.241885 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 15 23:58:15.242044 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 15 23:58:15.242227 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 15 23:58:15.242401 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 23:58:15.242557 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 15 23:58:15.243875 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 15 23:58:15.244030 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 15 23:58:15.244264 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 23:58:15.244442 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 15 23:58:15.244608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 15 23:58:15.244777 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 15 23:58:15.244920 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 15 23:58:15.245060 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 23:58:15.245223 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 23:58:15.245358 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 23:58:15.245500 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 15 23:58:15.245644 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 15 23:58:15.245787 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 23:58:15.245910 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 15 23:58:15.245921 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 23:58:15.245929 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 23:58:15.245938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 23:58:15.245946 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 23:58:15.245958 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 23:58:15.245966 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 23:58:15.245974 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 23:58:15.245982 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 23:58:15.245990 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 23:58:15.245998 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 23:58:15.246006 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 23:58:15.246014 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 23:58:15.246022 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 23:58:15.246033 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 23:58:15.246041 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 23:58:15.246049 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 23:58:15.246056 kernel: iommu: Default domain type: Translated
May 15 23:58:15.246064 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 23:58:15.246072 kernel: efivars: Registered efivars operations
May 15 23:58:15.246080 kernel: PCI: Using ACPI for IRQ routing
May 15 23:58:15.246088 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 23:58:15.246096 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 15 23:58:15.246106 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 15 23:58:15.246114 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 15 23:58:15.246122 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 15 23:58:15.246132 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 15 23:58:15.246143 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 15 23:58:15.246153 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 15 23:58:15.246230 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 15 23:58:15.246359 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 23:58:15.246484 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 23:58:15.246609 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 23:58:15.246620 kernel: vgaarb: loaded
May 15 23:58:15.246628 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 23:58:15.246636 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 23:58:15.246645 kernel: clocksource: Switched to clocksource kvm-clock
May 15 23:58:15.246652 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:58:15.246661 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:58:15.246668 kernel: pnp: PnP ACPI init
May 15 23:58:15.246821 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 15 23:58:15.246833 kernel: pnp: PnP ACPI: found 6 devices
May 15 23:58:15.246842 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 23:58:15.246849 kernel: NET: Registered PF_INET protocol family
May 15 23:58:15.246858 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:58:15.246887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:58:15.246898 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:58:15.246906 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:58:15.246917 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:58:15.246925 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:58:15.246933 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:58:15.246941 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:58:15.246949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:58:15.246957 kernel: NET: Registered PF_XDP protocol family
May 15 23:58:15.247087 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 15 23:58:15.247240 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 15 23:58:15.247372 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 23:58:15.247498 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 23:58:15.247611 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 23:58:15.247722 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 15 23:58:15.247847 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 15 23:58:15.247964 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 15 23:58:15.247976 kernel: PCI: CLS 0 bytes, default 64
May 15 23:58:15.247984 kernel: Initialise system trusted keyrings
May 15 23:58:15.247992 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:58:15.248004 kernel: Key type asymmetric registered
May 15 23:58:15.248012 kernel: Asymmetric key parser 'x509' registered
May 15 23:58:15.248020 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 23:58:15.248028 kernel: io scheduler mq-deadline registered
May 15 23:58:15.248037 kernel: io scheduler kyber registered
May 15 23:58:15.248045 kernel: io scheduler bfq registered
May 15 23:58:15.248053 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 23:58:15.248062 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 23:58:15.248070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 23:58:15.248081 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 23:58:15.248092 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 23:58:15.248100 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 23:58:15.248108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 23:58:15.248117 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 23:58:15.248125 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 23:58:15.248136 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 23:58:15.248301 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 23:58:15.248431 kernel: rtc_cmos 00:04: registered as rtc0
May 15 23:58:15.248561 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T23:58:14 UTC (1747353494)
May 15 23:58:15.248675 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 15 23:58:15.248686 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 23:58:15.248694 kernel: efifb: probing for efifb
May 15 23:58:15.248707 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 15 23:58:15.248718 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 15 23:58:15.248729 kernel: efifb: scrolling: redraw
May 15 23:58:15.248739 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 15 23:58:15.248747 kernel: Console: switching to colour frame buffer device 160x50
May 15 23:58:15.248755 kernel: fb0: EFI VGA frame buffer device
May 15 23:58:15.248774 kernel: pstore: Using crash dump compression: deflate
May 15 23:58:15.248782 kernel: pstore: Registered efi_pstore as persistent store backend
May 15 23:58:15.248790 kernel: NET: Registered PF_INET6 protocol family
May 15 23:58:15.248798 kernel: Segment Routing with IPv6
May 15 23:58:15.248810 kernel: In-situ OAM (IOAM) with IPv6
May 15 23:58:15.248818 kernel: NET: Registered PF_PACKET protocol family
May 15 23:58:15.248827 kernel: Key type dns_resolver registered
May 15 23:58:15.248835 kernel: IPI shorthand broadcast: enabled
May 15 23:58:15.248843 kernel: sched_clock: Marking stable (1774003082, 192862028)->(2130162246, -163297136)
May 15 23:58:15.248852 kernel: registered taskstats version 1
May 15 23:58:15.248860 kernel: Loading compiled-in X.509 certificates
May 15 23:58:15.248868 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1'
May 15 23:58:15.248876 kernel: Key type .fscrypt registered
May 15 23:58:15.248886 kernel: Key type fscrypt-provisioning registered
May 15 23:58:15.248895 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 23:58:15.248903 kernel: ima: Allocated hash algorithm: sha1 May 15 23:58:15.248911 kernel: ima: No architecture policies found May 15 23:58:15.248919 kernel: clk: Disabling unused clocks May 15 23:58:15.248927 kernel: Freeing unused kernel image (initmem) memory: 42988K May 15 23:58:15.248936 kernel: Write protecting the kernel read-only data: 36864k May 15 23:58:15.248944 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 15 23:58:15.248954 kernel: Run /init as init process May 15 23:58:15.248963 kernel: with arguments: May 15 23:58:15.248970 kernel: /init May 15 23:58:15.248978 kernel: with environment: May 15 23:58:15.248986 kernel: HOME=/ May 15 23:58:15.248994 kernel: TERM=linux May 15 23:58:15.249002 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 23:58:15.249017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 23:58:15.249031 systemd[1]: Detected virtualization kvm. May 15 23:58:15.249040 systemd[1]: Detected architecture x86-64. May 15 23:58:15.249048 systemd[1]: Running in initrd. May 15 23:58:15.249056 systemd[1]: No hostname configured, using default hostname. May 15 23:58:15.249065 systemd[1]: Hostname set to . May 15 23:58:15.249073 systemd[1]: Initializing machine ID from VM UUID. May 15 23:58:15.249082 systemd[1]: Queued start job for default target initrd.target. May 15 23:58:15.249091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:58:15.249102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 15 23:58:15.249111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 23:58:15.249120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:58:15.249131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 23:58:15.249142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 23:58:15.249153 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 23:58:15.249176 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 23:58:15.249188 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:58:15.249197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:58:15.249205 systemd[1]: Reached target paths.target - Path Units. May 15 23:58:15.249214 systemd[1]: Reached target slices.target - Slice Units. May 15 23:58:15.249222 systemd[1]: Reached target swap.target - Swaps. May 15 23:58:15.249230 systemd[1]: Reached target timers.target - Timer Units. May 15 23:58:15.249238 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:58:15.249247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:58:15.249255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:58:15.249266 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 23:58:15.249275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:58:15.249283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:58:15.249292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 23:58:15.249300 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:58:15.249308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:58:15.249317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:58:15.249325 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:58:15.249336 systemd[1]: Starting systemd-fsck-usr.service... May 15 23:58:15.249344 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:58:15.249353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:58:15.249361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:58:15.249370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:58:15.249378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:58:15.249386 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:58:15.249418 systemd-journald[195]: Collecting audit messages is disabled. May 15 23:58:15.249439 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:58:15.249451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:15.249460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:58:15.249469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:58:15.249478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:58:15.249486 systemd-journald[195]: Journal started May 15 23:58:15.249505 systemd-journald[195]: Runtime Journal (/run/log/journal/d46c9e9839e043739affb90c6d023686) is 6.0M, max 48.3M, 42.2M free. 
May 15 23:58:15.224718 systemd-modules-load[196]: Inserted module 'overlay' May 15 23:58:15.252770 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:58:15.253540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:58:15.256039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:58:15.264205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 23:58:15.267197 kernel: Bridge firewalling registered May 15 23:58:15.267203 systemd-modules-load[196]: Inserted module 'br_netfilter' May 15 23:58:15.269025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:58:15.271839 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:58:15.274686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:58:15.292415 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 23:58:15.294513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:58:15.308874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:58:15.311469 dracut-cmdline[224]: dracut-dracut-053 May 15 23:58:15.315662 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 15 23:58:15.320774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 15 23:58:15.357652 systemd-resolved[238]: Positive Trust Anchors: May 15 23:58:15.357667 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:58:15.357698 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:58:15.369640 systemd-resolved[238]: Defaulting to hostname 'linux'. May 15 23:58:15.371737 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:58:15.371884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:58:15.407203 kernel: SCSI subsystem initialized May 15 23:58:15.418217 kernel: Loading iSCSI transport class v2.0-870. May 15 23:58:15.431210 kernel: iscsi: registered transport (tcp) May 15 23:58:15.454491 kernel: iscsi: registered transport (qla4xxx) May 15 23:58:15.454581 kernel: QLogic iSCSI HBA Driver May 15 23:58:15.509845 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 23:58:15.519702 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:58:15.545223 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 15 23:58:15.545306 kernel: device-mapper: uevent: version 1.0.3 May 15 23:58:15.546353 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:58:15.594217 kernel: raid6: avx2x4 gen() 27971 MB/s May 15 23:58:15.610190 kernel: raid6: avx2x2 gen() 29972 MB/s May 15 23:58:15.627568 kernel: raid6: avx2x1 gen() 23727 MB/s May 15 23:58:15.627676 kernel: raid6: using algorithm avx2x2 gen() 29972 MB/s May 15 23:58:15.652211 kernel: raid6: .... xor() 18186 MB/s, rmw enabled May 15 23:58:15.652289 kernel: raid6: using avx2x2 recovery algorithm May 15 23:58:15.674203 kernel: xor: automatically using best checksumming function avx May 15 23:58:15.854209 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:58:15.872395 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:58:15.881502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:58:15.897223 systemd-udevd[415]: Using default interface naming scheme 'v255'. May 15 23:58:15.902262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:58:15.916502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 23:58:15.932417 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation May 15 23:58:15.973879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:58:15.987370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:58:16.064666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:58:16.109205 kernel: cryptd: max_cpu_qlen set to 1000 May 15 23:58:16.112197 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 23:58:16.130490 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:58:16.131097 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 15 23:58:16.137880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:58:16.147528 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:58:16.147559 kernel: GPT:9289727 != 19775487 May 15 23:58:16.147583 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:58:16.147597 kernel: GPT:9289727 != 19775487 May 15 23:58:16.147610 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:58:16.138106 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:58:16.155485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:58:16.155511 kernel: AVX2 version of gcm_enc/dec engaged. May 15 23:58:16.147713 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:58:16.158111 kernel: libata version 3.00 loaded. May 15 23:58:16.150331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:58:16.150910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:16.161365 kernel: AES CTR mode by8 optimization enabled May 15 23:58:16.154605 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:58:16.167572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:58:16.170507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
May 15 23:58:16.176098 kernel: ahci 0000:00:1f.2: version 3.0 May 15 23:58:16.176394 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 23:58:16.179424 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 23:58:16.179767 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 23:58:16.187201 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (469) May 15 23:58:16.188190 kernel: scsi host0: ahci May 15 23:58:16.188542 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (478) May 15 23:58:16.197544 kernel: scsi host1: ahci May 15 23:58:16.201817 kernel: scsi host2: ahci May 15 23:58:16.202026 kernel: scsi host3: ahci May 15 23:58:16.203250 kernel: scsi host4: ahci May 15 23:58:16.205794 kernel: scsi host5: ahci May 15 23:58:16.206028 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 15 23:58:16.206046 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 15 23:58:16.208095 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 23:58:16.210979 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 15 23:58:16.211014 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 15 23:58:16.211033 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 15 23:58:16.211050 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 15 23:58:16.214297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:16.235099 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:58:16.241390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 15 23:58:16.254117 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 23:58:16.259004 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:58:16.261289 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:58:16.262665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:58:16.265303 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:58:16.282424 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 23:58:16.284904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:58:16.287840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:58:16.304621 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:58:16.325702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:58:16.531224 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 23:58:16.531309 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 23:58:16.532204 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 23:58:16.533214 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 23:58:16.534189 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 23:58:16.535194 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 23:58:16.535217 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 23:58:16.536277 kernel: ata3.00: applying bridge limits May 15 23:58:16.536348 kernel: ata3.00: configured for UDMA/100 May 15 23:58:16.538812 disk-uuid[552]: Primary Header is updated. May 15 23:58:16.538812 disk-uuid[552]: Secondary Entries is updated. 
May 15 23:58:16.538812 disk-uuid[552]: Secondary Header is updated. May 15 23:58:16.542693 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 23:58:16.542749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:58:16.599805 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 23:58:16.600075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 23:58:16.614203 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 23:58:17.574211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:58:17.574888 disk-uuid[567]: The operation has completed successfully. May 15 23:58:17.607376 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:58:17.607521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:58:17.637492 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:58:17.641087 sh[595]: Success May 15 23:58:17.797223 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 23:58:17.839719 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:58:17.854440 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:58:17.857430 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:58:17.871550 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 15 23:58:17.871617 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 23:58:17.871633 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:58:17.872779 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:58:17.874230 kernel: BTRFS info (device dm-0): using free space tree May 15 23:58:17.880746 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 15 23:58:17.883499 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:58:17.898560 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:58:17.901738 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 23:58:17.914223 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:58:17.914279 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:58:17.914295 kernel: BTRFS info (device vda6): using free space tree May 15 23:58:17.919886 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:58:17.930064 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 23:58:17.932585 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:58:18.032912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:58:18.050611 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:58:18.076025 systemd-networkd[773]: lo: Link UP May 15 23:58:18.076029 systemd-networkd[773]: lo: Gained carrier May 15 23:58:18.077938 systemd-networkd[773]: Enumeration completed May 15 23:58:18.078428 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:58:18.078432 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:58:18.080067 systemd-networkd[773]: eth0: Link UP May 15 23:58:18.080071 systemd-networkd[773]: eth0: Gained carrier May 15 23:58:18.080078 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 23:58:18.080485 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:58:18.083382 systemd[1]: Reached target network.target - Network. May 15 23:58:18.096670 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:58:18.109632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:58:18.110321 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:58:18.284272 ignition[777]: Ignition 2.20.0 May 15 23:58:18.284290 ignition[777]: Stage: fetch-offline May 15 23:58:18.284350 ignition[777]: no configs at "/usr/lib/ignition/base.d" May 15 23:58:18.284363 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:18.284493 ignition[777]: parsed url from cmdline: "" May 15 23:58:18.284498 ignition[777]: no config URL provided May 15 23:58:18.284504 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:58:18.284517 ignition[777]: no config at "/usr/lib/ignition/user.ign" May 15 23:58:18.284557 ignition[777]: op(1): [started] loading QEMU firmware config module May 15 23:58:18.284564 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:58:18.298017 ignition[777]: op(1): [finished] loading QEMU firmware config module May 15 23:58:18.337265 ignition[777]: parsing config with SHA512: b03c3e57a4e204d20f63e112372962e0e9032b8ce52f6c175ea192301c97b1f29c506d5b7c0f265835e6159ba15e293c6326f9715e9574f4027b98108d279db9 May 15 23:58:18.342847 unknown[777]: fetched base config from "system" May 15 23:58:18.342863 unknown[777]: fetched user config from "qemu" May 15 23:58:18.354736 ignition[777]: fetch-offline: fetch-offline passed May 15 23:58:18.355882 ignition[777]: Ignition finished successfully May 15 23:58:18.359856 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 15 23:58:18.363072 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:58:18.396522 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 23:58:18.431911 ignition[788]: Ignition 2.20.0 May 15 23:58:18.431936 ignition[788]: Stage: kargs May 15 23:58:18.432125 ignition[788]: no configs at "/usr/lib/ignition/base.d" May 15 23:58:18.432145 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:18.433137 ignition[788]: kargs: kargs passed May 15 23:58:18.433210 ignition[788]: Ignition finished successfully May 15 23:58:18.437937 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:58:18.447499 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 23:58:18.583052 ignition[796]: Ignition 2.20.0 May 15 23:58:18.583070 ignition[796]: Stage: disks May 15 23:58:18.583326 ignition[796]: no configs at "/usr/lib/ignition/base.d" May 15 23:58:18.583344 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:18.587370 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:58:18.584579 ignition[796]: disks: disks passed May 15 23:58:18.605504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:58:18.584644 ignition[796]: Ignition finished successfully May 15 23:58:18.607897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:58:18.610119 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:58:18.611407 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:58:18.613912 systemd[1]: Reached target basic.target - Basic System. May 15 23:58:18.627521 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 15 23:58:18.646198 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:58:18.730950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:58:18.759407 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 23:58:18.878188 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 15 23:58:18.878514 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:58:18.879227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:58:18.893516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:58:18.895929 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:58:18.898496 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 23:58:18.898571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:58:18.907144 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) May 15 23:58:18.898611 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:58:18.913617 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:58:18.913644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:58:18.913680 kernel: BTRFS info (device vda6): using free space tree May 15 23:58:18.913695 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:58:18.915072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:58:18.935435 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:58:18.939744 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 15 23:58:18.983584 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:58:18.991590 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 15 23:58:18.997560 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:58:19.002885 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:58:19.122410 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 23:58:19.142360 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:58:19.146137 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:58:19.150582 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:58:19.152369 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:58:19.182419 ignition[930]: INFO : Ignition 2.20.0 May 15 23:58:19.182419 ignition[930]: INFO : Stage: mount May 15 23:58:19.184499 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:58:19.184499 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:19.184499 ignition[930]: INFO : mount: mount passed May 15 23:58:19.184499 ignition[930]: INFO : Ignition finished successfully May 15 23:58:19.185505 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:58:19.186134 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 23:58:19.197317 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:58:19.311629 systemd-networkd[773]: eth0: Gained IPv6LL May 15 23:58:19.893480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 23:58:19.902222 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
May 15 23:58:19.902314 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5
May 15 23:58:19.904879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 23:58:19.904927 kernel: BTRFS info (device vda6): using free space tree
May 15 23:58:19.911217 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:58:19.912597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:58:19.938689 ignition[961]: INFO : Ignition 2.20.0
May 15 23:58:19.938689 ignition[961]: INFO : Stage: files
May 15 23:58:19.941023 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:58:19.941023 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:19.941023 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 15 23:58:19.941023 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 23:58:19.941023 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 23:58:19.950748 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 23:58:19.950748 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 23:58:19.950748 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 23:58:19.950748 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 23:58:19.950748 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 23:58:19.944911 unknown[961]: wrote ssh authorized keys file for user: core
May 15 23:58:20.003061 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 23:58:20.204140 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 23:58:20.204140 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:58:20.209248 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 23:58:20.667801 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 23:58:20.908809 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 15 23:58:20.923307 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 15 23:58:21.439192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 23:58:22.103829 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 15 23:58:22.103829 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 23:58:22.117212 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 23:58:22.226218 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:58:22.233118 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 23:58:22.235378 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 23:58:22.235378 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 23:58:22.238873 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 23:58:22.240517 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:58:22.242654 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 23:58:22.357046 ignition[961]: INFO : files: files passed
May 15 23:58:22.358006 ignition[961]: INFO : Ignition finished successfully
May 15 23:58:22.360994 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 23:58:22.369515 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 23:58:22.371871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 23:58:22.376805 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 23:58:22.376985 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 23:58:22.442911 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 23:58:22.447825 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:58:22.447825 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:58:22.451329 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 23:58:22.455000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:58:22.540489 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 23:58:22.559484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 23:58:22.666076 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 23:58:22.666257 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 23:58:22.697467 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 23:58:22.699591 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 23:58:22.702121 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 23:58:22.713520 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 23:58:22.755656 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:58:22.760890 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 23:58:22.835991 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 23:58:22.836211 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:58:22.896794 systemd[1]: Stopped target timers.target - Timer Units.
May 15 23:58:22.897901 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 23:58:22.898064 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:58:22.903037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 23:58:22.903202 systemd[1]: Stopped target basic.target - Basic System.
May 15 23:58:22.906254 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 23:58:22.907229 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:58:22.907781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 23:58:22.908159 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 23:58:22.908743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:58:22.909128 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 23:58:22.909638 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 23:58:22.910029 systemd[1]: Stopped target swap.target - Swaps.
May 15 23:58:22.910489 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 23:58:22.910663 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:58:22.965268 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 23:58:22.965480 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:58:22.965830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 23:58:22.965990 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:58:22.972055 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 23:58:22.972273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 23:58:23.007417 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 23:58:23.007717 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:58:23.010149 systemd[1]: Stopped target paths.target - Path Units.
May 15 23:58:23.012206 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 23:58:23.016259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:58:23.016520 systemd[1]: Stopped target slices.target - Slice Units.
May 15 23:58:23.020410 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 23:58:23.051897 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 23:58:23.052062 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:58:23.053840 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 23:58:23.053973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:58:23.054897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 23:58:23.055075 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:58:23.058004 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 23:58:23.058188 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 23:58:23.076581 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 23:58:23.077960 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 23:58:23.104511 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 23:58:23.104801 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:58:23.105603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 23:58:23.105746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:58:23.142354 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 23:58:23.142545 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 23:58:23.184867 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 23:58:23.206715 ignition[1016]: INFO : Ignition 2.20.0
May 15 23:58:23.206715 ignition[1016]: INFO : Stage: umount
May 15 23:58:23.232480 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:58:23.232480 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:23.232480 ignition[1016]: INFO : umount: umount passed
May 15 23:58:23.232480 ignition[1016]: INFO : Ignition finished successfully
May 15 23:58:23.239615 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 23:58:23.239798 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 23:58:23.244550 systemd[1]: Stopped target network.target - Network.
May 15 23:58:23.244728 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 23:58:23.244854 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 23:58:23.277290 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 23:58:23.277380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 23:58:23.279717 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 23:58:23.279768 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 23:58:23.282332 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 23:58:23.282390 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 23:58:23.285337 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 23:58:23.288068 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 23:58:23.299272 systemd-networkd[773]: eth0: DHCPv6 lease lost
May 15 23:58:23.301987 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 23:58:23.302124 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 23:58:23.326986 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 23:58:23.327227 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 23:58:23.330705 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 23:58:23.330894 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 23:58:23.334845 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 23:58:23.334922 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:58:23.336604 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 23:58:23.336659 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 23:58:23.374365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 23:58:23.375504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 23:58:23.375595 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:58:23.377912 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:58:23.377966 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:58:23.379075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 23:58:23.379122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 23:58:23.381323 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 23:58:23.381374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:58:23.383975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:58:23.394563 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 23:58:23.394750 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 23:58:23.410287 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 23:58:23.410492 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:58:23.413291 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 23:58:23.413356 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 23:58:23.415331 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 23:58:23.415384 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:58:23.416401 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 23:58:23.416468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:58:23.421925 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 23:58:23.421996 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 23:58:23.424801 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:58:23.424866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:58:23.438378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 23:58:23.439541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 23:58:23.439613 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:58:23.442182 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 23:58:23.442242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:58:23.443920 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 23:58:23.444010 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:58:23.447495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:58:23.447579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:23.455609 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 23:58:23.455740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 23:58:23.456023 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 23:58:23.460354 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 23:58:23.476634 systemd[1]: Switching root.
May 15 23:58:23.509680 systemd-journald[195]: Journal stopped
May 15 23:58:27.723387 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
May 15 23:58:27.723458 kernel: SELinux: policy capability network_peer_controls=1
May 15 23:58:27.723475 kernel: SELinux: policy capability open_perms=1
May 15 23:58:27.723487 kernel: SELinux: policy capability extended_socket_class=1
May 15 23:58:27.723498 kernel: SELinux: policy capability always_check_network=0
May 15 23:58:27.723509 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 23:58:27.723530 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 23:58:27.723541 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 23:58:27.723556 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 23:58:27.723568 kernel: audit: type=1403 audit(1747353506.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 23:58:27.723581 systemd[1]: Successfully loaded SELinux policy in 47.373ms.
May 15 23:58:27.723604 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.158ms.
May 15 23:58:27.723618 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:58:27.723630 systemd[1]: Detected virtualization kvm.
May 15 23:58:27.723642 systemd[1]: Detected architecture x86-64.
May 15 23:58:27.723654 systemd[1]: Detected first boot.
May 15 23:58:27.723666 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:58:27.723683 zram_generator::config[1060]: No configuration found.
May 15 23:58:27.723697 systemd[1]: Populated /etc with preset unit settings.
May 15 23:58:27.723713 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 23:58:27.723726 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 23:58:27.723738 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 23:58:27.723750 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 23:58:27.723763 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 23:58:27.723774 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 23:58:27.723789 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 23:58:27.723801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 23:58:27.723818 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 23:58:27.723830 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 23:58:27.723842 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 23:58:27.723854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:58:27.723867 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:58:27.723879 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 23:58:27.723891 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 23:58:27.723906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 23:58:27.723919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:58:27.723931 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 23:58:27.723943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:58:27.723956 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 23:58:27.723968 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 23:58:27.723980 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 23:58:27.723995 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 23:58:27.724007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:58:27.724020 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:58:27.724033 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:58:27.724045 systemd[1]: Reached target swap.target - Swaps.
May 15 23:58:27.724057 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 23:58:27.724069 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 23:58:27.724082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:58:27.724094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:58:27.724106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:58:27.724120 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 23:58:27.724132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 23:58:27.724144 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 23:58:27.724156 systemd[1]: Mounting media.mount - External Media Directory...
May 15 23:58:27.724183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:27.724213 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 23:58:27.724225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 23:58:27.724238 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 23:58:27.724255 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 23:58:27.724268 systemd[1]: Reached target machines.target - Containers.
May 15 23:58:27.724279 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 23:58:27.724292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:58:27.724304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:58:27.724316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 23:58:27.724328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:58:27.724341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:58:27.724353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:58:27.724368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 23:58:27.724379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:58:27.724392 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 23:58:27.724415 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 23:58:27.724428 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 23:58:27.724440 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 23:58:27.724452 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 23:58:27.724464 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:58:27.724495 systemd-journald[1123]: Collecting audit messages is disabled.
May 15 23:58:27.724518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:58:27.724531 systemd-journald[1123]: Journal started
May 15 23:58:27.724553 systemd-journald[1123]: Runtime Journal (/run/log/journal/d46c9e9839e043739affb90c6d023686) is 6.0M, max 48.3M, 42.2M free.
May 15 23:58:27.144187 systemd[1]: Queued start job for default target multi-user.target.
May 15 23:58:27.166663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 23:58:27.167246 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 23:58:27.729231 kernel: fuse: init (API version 7.39)
May 15 23:58:27.734144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 23:58:27.734216 kernel: loop: module loaded
May 15 23:58:27.739219 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 23:58:27.782949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:58:27.783052 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 23:58:27.783075 systemd[1]: Stopped verity-setup.service.
May 15 23:58:27.789923 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:27.792872 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:58:27.793855 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 23:58:27.795244 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 23:58:27.796564 systemd[1]: Mounted media.mount - External Media Directory.
May 15 23:58:27.797733 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 23:58:27.799086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 23:58:27.800698 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 23:58:27.802480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:58:27.804203 kernel: ACPI: bus type drm_connector registered
May 15 23:58:27.805531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 23:58:27.805799 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 23:58:27.807714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:58:27.807934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:58:27.809855 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:58:27.810034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:58:27.811620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:58:27.811798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:58:27.813711 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 23:58:27.813886 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 23:58:27.815442 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:58:27.815711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:58:27.817335 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:58:27.818844 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 23:58:27.821415 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 23:58:27.834309 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 23:58:27.847287 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 23:58:27.850068 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 23:58:27.851371 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 23:58:27.851409 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:58:27.854041 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 23:58:27.901912 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 23:58:27.904467 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 23:58:27.952317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:58:27.955080 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 23:58:27.958362 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 23:58:27.960112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:58:27.961597 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 23:58:27.963224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:58:27.968137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:58:27.971267 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 23:58:27.977124 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:58:28.013455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:58:28.015426 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 23:58:28.017119 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 23:58:28.019032 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 23:58:28.029340 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 23:58:28.093962 systemd-journald[1123]: Time spent on flushing to /var/log/journal/d46c9e9839e043739affb90c6d023686 is 15.256ms for 1046 entries.
May 15 23:58:28.093962 systemd-journald[1123]: System Journal (/var/log/journal/d46c9e9839e043739affb90c6d023686) is 8.0M, max 195.6M, 187.6M free.
May 15 23:58:28.506811 systemd-journald[1123]: Received client request to flush runtime journal.
May 15 23:58:28.506921 kernel: loop0: detected capacity change from 0 to 140992
May 15 23:58:28.506950 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 23:58:28.506970 kernel: loop1: detected capacity change from 0 to 138184
May 15 23:58:28.506989 kernel: loop2: detected capacity change from 0 to 224512
May 15 23:58:28.101327 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 15 23:58:28.147465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 23:58:28.255071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:58:28.256921 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 15 23:58:28.256936 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
May 15 23:58:28.264240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:58:28.282427 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 23:58:28.318616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 23:58:28.337490 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 23:58:28.350399 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 23:58:28.421139 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 23:58:28.434444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:58:28.464544 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 15 23:58:28.464562 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 15 23:58:28.470152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:58:28.508708 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 23:58:28.535201 kernel: loop3: detected capacity change from 0 to 140992
May 15 23:58:28.636239 kernel: loop4: detected capacity change from 0 to 138184
May 15 23:58:28.650575 kernel: loop5: detected capacity change from 0 to 224512
May 15 23:58:28.657707 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 23:58:28.658319 (sd-merge)[1205]: Merged extensions into '/usr'.
May 15 23:58:28.662685 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 23:58:28.662704 systemd[1]: Reloading...
May 15 23:58:28.719154 zram_generator::config[1232]: No configuration found.
May 15 23:58:28.847953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:58:28.901608 systemd[1]: Reloading finished in 238 ms.
May 15 23:58:28.933196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 23:58:28.957522 systemd[1]: Starting ensure-sysext.service...
May 15 23:58:28.960343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:58:28.966489 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
May 15 23:58:28.966506 systemd[1]: Reloading...
May 15 23:58:28.987492 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 23:58:28.987947 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 23:58:28.989763 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 23:58:28.990524 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 15 23:58:28.990625 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 15 23:58:28.995851 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:58:28.995974 systemd-tmpfiles[1269]: Skipping /boot
May 15 23:58:29.017062 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:58:29.017079 systemd-tmpfiles[1269]: Skipping /boot
May 15 23:58:29.024790 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 23:58:29.048310 zram_generator::config[1299]: No configuration found.
May 15 23:58:29.162406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:58:29.221044 systemd[1]: Reloading finished in 254 ms.
May 15 23:58:29.241077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 23:58:29.242693 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 23:58:29.318228 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 23:58:29.327899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:58:29.414657 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:58:29.432265 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 23:58:29.435248 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 23:58:29.443350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:58:29.474864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 23:58:29.480671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.480907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:58:29.482690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:58:29.509618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:58:29.512781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:58:29.514395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:58:29.521252 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 23:58:29.543909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.545109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:58:29.545345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:58:29.547281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:58:29.547509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:58:29.592483 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 23:58:29.594510 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:58:29.594687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:58:29.603133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.603405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:58:29.610639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:58:29.685754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:58:29.688212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:58:29.690690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:58:29.690903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.692484 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 23:58:29.739610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:58:29.739824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:58:29.741780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:58:29.741977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:58:29.744415 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:58:29.744688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:58:29.755786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.755995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:58:29.764032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:58:29.766996 augenrules[1382]: No rules
May 15 23:58:29.769659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:58:29.772406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:58:29.775421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:58:29.797510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:58:29.797775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:58:29.799033 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 23:58:29.801147 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:58:29.801427 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:58:29.803120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:58:29.803377 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:58:29.805323 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:58:29.805542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:58:29.807587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:58:29.807767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:58:29.809740 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:58:29.809977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:58:29.815214 systemd[1]: Finished ensure-sysext.service.
May 15 23:58:29.833493 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:58:29.833601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:58:29.840422 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 23:58:29.893119 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 23:58:29.895099 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 23:58:29.900059 systemd-resolved[1342]: Positive Trust Anchors:
May 15 23:58:29.900080 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:58:29.900117 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:58:29.904244 systemd-resolved[1342]: Defaulting to hostname 'linux'.
May 15 23:58:29.906211 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:58:29.918742 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 23:58:29.920198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:58:29.921424 systemd[1]: Reached target time-set.target - System Time Set.
May 15 23:58:29.947946 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 23:58:29.962508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:58:29.965700 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 23:58:29.986368 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 23:58:29.995320 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
May 15 23:58:30.018110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:58:30.048441 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:58:30.050448 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 23:58:30.094195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1405)
May 15 23:58:30.124441 systemd-networkd[1423]: lo: Link UP
May 15 23:58:30.124459 systemd-networkd[1423]: lo: Gained carrier
May 15 23:58:30.129424 systemd-networkd[1423]: Enumeration completed
May 15 23:58:30.129848 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:58:30.129860 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:58:30.130413 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:58:30.178964 systemd[1]: Reached target network.target - Network.
May 15 23:58:30.182316 systemd-networkd[1423]: eth0: Link UP
May 15 23:58:30.182330 systemd-networkd[1423]: eth0: Gained carrier
May 15 23:58:30.182366 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:58:30.192561 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 23:58:31.356755 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 23:58:30.194250 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:58:30.194944 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
May 15 23:58:31.355941 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 23:58:31.355999 systemd-timesyncd[1397]: Initial clock synchronization to Thu 2025-05-15 23:58:31.355776 UTC.
May 15 23:58:31.356044 systemd-resolved[1342]: Clock change detected. Flushing caches.
May 15 23:58:31.359735 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 23:58:31.363982 kernel: ACPI: button: Power Button [PWRF]
May 15 23:58:31.361630 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 23:58:31.379087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 23:58:31.385967 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 15 23:58:31.386383 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 23:58:31.386624 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 23:58:31.442359 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 23:58:31.386701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:31.448694 kernel: mousedev: PS/2 mouse device common for all mice
May 15 23:58:31.460296 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 23:58:31.519299 kernel: kvm_amd: TSC scaling supported
May 15 23:58:31.519406 kernel: kvm_amd: Nested Virtualization enabled
May 15 23:58:31.519419 kernel: kvm_amd: Nested Paging enabled
May 15 23:58:31.519950 kernel: kvm_amd: LBR virtualization supported
May 15 23:58:31.521398 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 15 23:58:31.521433 kernel: kvm_amd: Virtual GIF supported
May 15 23:58:31.543744 kernel: EDAC MC: Ver: 3.0.0
May 15 23:58:31.557760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:31.647399 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 23:58:31.659055 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 23:58:31.668739 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:58:31.700149 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 23:58:31.702212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:58:31.703685 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:58:31.705208 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 23:58:31.706840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 23:58:31.708725 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 23:58:31.710514 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 23:58:31.712320 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 23:58:31.714007 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 23:58:31.714056 systemd[1]: Reached target paths.target - Path Units.
May 15 23:58:31.715299 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:58:31.717693 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 23:58:31.753355 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 23:58:31.763230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 23:58:31.822043 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 23:58:31.824008 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 23:58:31.825413 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:58:31.826561 systemd[1]: Reached target basic.target - Basic System.
May 15 23:58:31.827728 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 23:58:31.827763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 23:58:31.829206 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 23:58:31.832474 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 23:58:31.835084 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:58:31.838015 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 23:58:31.843545 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 23:58:31.846834 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 23:58:31.849746 jq[1455]: false
May 15 23:58:31.851004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 23:58:31.854161 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 23:58:31.861198 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 23:58:31.864572 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 23:58:31.866542 extend-filesystems[1456]: Found loop3
May 15 23:58:31.867808 extend-filesystems[1456]: Found loop4
May 15 23:58:31.867808 extend-filesystems[1456]: Found loop5
May 15 23:58:31.867808 extend-filesystems[1456]: Found sr0
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda1
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda2
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda3
May 15 23:58:31.867808 extend-filesystems[1456]: Found usr
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda4
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda6
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda7
May 15 23:58:31.867808 extend-filesystems[1456]: Found vda9
May 15 23:58:31.867808 extend-filesystems[1456]: Checking size of /dev/vda9
May 15 23:58:31.881452 dbus-daemon[1454]: [system] SELinux support is enabled
May 15 23:58:31.885029 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 23:58:31.886800 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 23:58:31.887467 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 23:58:31.888438 systemd[1]: Starting update-engine.service - Update Engine...
May 15 23:58:31.891792 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 23:58:31.894602 extend-filesystems[1456]: Resized partition /dev/vda9
May 15 23:58:31.894649 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 23:58:31.899682 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 23:58:31.902124 jq[1472]: true
May 15 23:58:31.903350 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024)
May 15 23:58:31.905164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 23:58:31.905471 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 23:58:31.905919 systemd[1]: motdgen.service: Deactivated successfully.
May 15 23:58:31.906174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 23:58:31.911029 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 23:58:31.911308 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 23:58:31.919744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1415)
May 15 23:58:31.927757 update_engine[1470]: I20250515 23:58:31.926643 1470 main.cc:92] Flatcar Update Engine starting
May 15 23:58:31.931316 jq[1479]: true
May 15 23:58:31.932479 update_engine[1470]: I20250515 23:58:31.932420 1470 update_check_scheduler.cc:74] Next update check in 7m23s
May 15 23:58:31.939151 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 23:58:31.941509 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 23:58:31.941546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 23:58:31.943073 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 23:58:31.943098 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 23:58:31.946081 systemd[1]: Started update-engine.service - Update Engine.
May 15 23:58:31.954164 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 23:58:32.054025 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 23:58:32.054053 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 23:58:32.054496 systemd-logind[1469]: New seat seat0.
May 15 23:58:32.057010 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 23:58:32.142837 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 23:58:32.142958 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 23:58:32.153443 tar[1478]: linux-amd64/LICENSE
May 15 23:58:32.153738 tar[1478]: linux-amd64/helm
May 15 23:58:32.173889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 23:58:32.269954 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 23:58:32.279139 systemd[1]: issuegen.service: Deactivated successfully.
May 15 23:58:32.279391 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 23:58:32.395976 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 23:58:32.525706 tar[1478]: linux-amd64/README.md
May 15 23:58:32.534805 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 23:58:32.539424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 23:58:32.543594 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 23:58:32.627912 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 23:58:32.629203 systemd[1]: Reached target getty.target - Login Prompts.
May 15 23:58:32.791086 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 23:58:33.080952 systemd-networkd[1423]: eth0: Gained IPv6LL
May 15 23:58:33.084461 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 23:58:33.179423 systemd[1]: Reached target network-online.target - Network is Online.
May 15 23:58:33.189976 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 15 23:58:33.197846 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 23:58:33.198077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:58:33.201446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 23:58:33.224126 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 23:58:33.224425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 23:58:33.280773 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 23:58:33.827342 containerd[1480]: time="2025-05-15T23:58:33.827196871Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 15 23:58:33.838380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 23:58:33.852778 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 23:58:33.852778 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 23:58:33.852778 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 23:58:33.857823 extend-filesystems[1456]: Resized filesystem in /dev/vda9
May 15 23:58:33.854144 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 23:58:33.854476 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 23:58:33.915441 containerd[1480]: time="2025-05-15T23:58:33.915370410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.917502 containerd[1480]: time="2025-05-15T23:58:33.917465940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 23:58:33.917502 containerd[1480]: time="2025-05-15T23:58:33.917499533Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 23:58:33.917589 containerd[1480]: time="2025-05-15T23:58:33.917519461Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 23:58:33.917771 containerd[1480]: time="2025-05-15T23:58:33.917751095Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 23:58:33.917771 containerd[1480]: time="2025-05-15T23:58:33.917771233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.917906 containerd[1480]: time="2025-05-15T23:58:33.917880137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:58:33.917944 containerd[1480]: time="2025-05-15T23:58:33.917906597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.918200 containerd[1480]: time="2025-05-15T23:58:33.918159441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:58:33.918200 containerd[1480]: time="2025-05-15T23:58:33.918181673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.918200 containerd[1480]: time="2025-05-15T23:58:33.918198785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:58:33.918200 containerd[1480]: time="2025-05-15T23:58:33.918210687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.918400 containerd[1480]: time="2025-05-15T23:58:33.918330682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.918677 containerd[1480]: time="2025-05-15T23:58:33.918638059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 23:58:33.918861 containerd[1480]: time="2025-05-15T23:58:33.918825190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:58:33.918861 containerd[1480]: time="2025-05-15T23:58:33.918851860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 23:58:33.919014 containerd[1480]: time="2025-05-15T23:58:33.918973949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 23:58:33.919070 containerd[1480]: time="2025-05-15T23:58:33.919054510Z" level=info msg="metadata content store policy set" policy=shared
May 15 23:58:34.183484 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
May 15 23:58:34.185961 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 23:58:34.188775 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 23:58:34.204907 containerd[1480]: time="2025-05-15T23:58:34.204829312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 23:58:34.205063 containerd[1480]: time="2025-05-15T23:58:34.204920202Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 23:58:34.205063 containerd[1480]: time="2025-05-15T23:58:34.204949197Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 23:58:34.205063 containerd[1480]: time="2025-05-15T23:58:34.204974895Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 15 23:58:34.205063 containerd[1480]: time="2025-05-15T23:58:34.204996175Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 23:58:34.205306 containerd[1480]: time="2025-05-15T23:58:34.205275228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 23:58:34.205701 containerd[1480]: time="2025-05-15T23:58:34.205668917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 23:58:34.205935 containerd[1480]: time="2025-05-15T23:58:34.205902755Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 23:58:34.205935 containerd[1480]: time="2025-05-15T23:58:34.205930898Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 23:58:34.206000 containerd[1480]: time="2025-05-15T23:58:34.205950515Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 23:58:34.206000 containerd[1480]: time="2025-05-15T23:58:34.205968950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206000 containerd[1480]: time="2025-05-15T23:58:34.205985280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206079 containerd[1480]: time="2025-05-15T23:58:34.206000869Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206079 containerd[1480]: time="2025-05-15T23:58:34.206024133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206079 containerd[1480]: time="2025-05-15T23:58:34.206043269Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206079 containerd[1480]: time="2025-05-15T23:58:34.206062495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206079 containerd[1480]: time="2025-05-15T23:58:34.206078024Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206092371Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206116897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206133919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206150109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206165087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206180176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206195464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206218 containerd[1480]: time="2025-05-15T23:58:34.206208860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206224088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206239226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206257571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206271687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206287066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206304479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206323855Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206350004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206376795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206443 containerd[1480]: time="2025-05-15T23:58:34.206394287Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206465120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206490408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206505616Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206523490Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206537376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206554818Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206583953Z" level=info msg="NRI interface is disabled by configuration."
May 15 23:58:34.206742 containerd[1480]: time="2025-05-15T23:58:34.206614661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 23:58:34.207152 containerd[1480]: time="2025-05-15T23:58:34.207083149Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 23:58:34.207152 containerd[1480]: time="2025-05-15T23:58:34.207137501Z" level=info msg="Connect containerd service"
May 15 23:58:34.207359 containerd[1480]: time="2025-05-15T23:58:34.207177276Z" level=info msg="using legacy CRI server"
May 15 23:58:34.207359 containerd[1480]: time="2025-05-15T23:58:34.207186513Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 23:58:34.208361 containerd[1480]: time="2025-05-15T23:58:34.207680239Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 23:58:34.208764 containerd[1480]: time="2025-05-15T23:58:34.208739857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.208922239Z" level=info msg="Start subscribing containerd event"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.208983995Z" level=info msg="Start recovering state"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.209082951Z" level=info msg="Start event monitor"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.209113798Z" level=info msg="Start snapshots syncer"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.209129237Z" level=info msg="Start cni network conf syncer for default"
May 15 23:58:34.209222 containerd[1480]: time="2025-05-15T23:58:34.209147492Z" level=info msg="Start streaming server"
May 15 23:58:34.210207 containerd[1480]: time="2025-05-15T23:58:34.210140384Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 23:58:34.210337 containerd[1480]: time="2025-05-15T23:58:34.210263084Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 23:58:34.210372 containerd[1480]: time="2025-05-15T23:58:34.210350989Z" level=info msg="containerd successfully booted in 0.568581s"
May 15 23:58:34.210517 systemd[1]: Started containerd.service - containerd container runtime.
May 15 23:58:34.873700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:58:34.875829 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 23:58:34.877372 systemd[1]: Startup finished in 1.924s (kernel) + 11.389s (initrd) + 7.492s (userspace) = 20.806s.
May 15 23:58:34.880592 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:58:35.373876 kubelet[1567]: E0515 23:58:35.373608    1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:58:35.378007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:58:35.378294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:58:35.378793 systemd[1]: kubelet.service: Consumed 1.109s CPU time.
May 15 23:58:41.520432 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 23:58:41.521873 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:43852.service - OpenSSH per-connection server daemon (10.0.0.1:43852).
May 15 23:58:41.602189 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 43852 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:41.604511 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:41.615855 systemd-logind[1469]: New session 1 of user core.
May 15 23:58:41.617200 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 23:58:41.630934 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 23:58:41.644757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 23:58:41.647826 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 23:58:41.656746 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 23:58:41.789512 systemd[1585]: Queued start job for default target default.target.
May 15 23:58:41.801124 systemd[1585]: Created slice app.slice - User Application Slice.
May 15 23:58:41.801158 systemd[1585]: Reached target paths.target - Paths.
May 15 23:58:41.801177 systemd[1585]: Reached target timers.target - Timers.
May 15 23:58:41.802946 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 23:58:41.817100 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 23:58:41.817232 systemd[1585]: Reached target sockets.target - Sockets.
May 15 23:58:41.817251 systemd[1585]: Reached target basic.target - Basic System.
May 15 23:58:41.817290 systemd[1585]: Reached target default.target - Main User Target.
May 15 23:58:41.817325 systemd[1585]: Startup finished in 153ms.
May 15 23:58:41.817729 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 23:58:41.819494 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 23:58:41.883315 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:43854.service - OpenSSH per-connection server daemon (10.0.0.1:43854).
May 15 23:58:41.935833 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43854 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:41.938443 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:41.950271 systemd-logind[1469]: New session 2 of user core.
May 15 23:58:41.961187 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 23:58:42.022978 sshd[1598]: Connection closed by 10.0.0.1 port 43854
May 15 23:58:42.023779 sshd-session[1596]: pam_unix(sshd:session): session closed for user core
May 15 23:58:42.043120 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:43854.service: Deactivated successfully.
May 15 23:58:42.045525 systemd[1]: session-2.scope: Deactivated successfully.
May 15 23:58:42.047147 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit.
May 15 23:58:42.055044 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:43860.service - OpenSSH per-connection server daemon (10.0.0.1:43860).
May 15 23:58:42.057006 systemd-logind[1469]: Removed session 2.
May 15 23:58:42.097394 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 43860 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:42.100415 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:42.109426 systemd-logind[1469]: New session 3 of user core.
May 15 23:58:42.119115 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 23:58:42.173987 sshd[1605]: Connection closed by 10.0.0.1 port 43860
May 15 23:58:42.174551 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
May 15 23:58:42.188121 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:43860.service: Deactivated successfully.
May 15 23:58:42.190769 systemd[1]: session-3.scope: Deactivated successfully.
May 15 23:58:42.192886 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit.
May 15 23:58:42.203315 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:43862.service - OpenSSH per-connection server daemon (10.0.0.1:43862).
May 15 23:58:42.204464 systemd-logind[1469]: Removed session 3.
May 15 23:58:42.243379 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 43862 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:42.245534 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:42.251395 systemd-logind[1469]: New session 4 of user core.
May 15 23:58:42.261051 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 23:58:42.323228 sshd[1612]: Connection closed by 10.0.0.1 port 43862
May 15 23:58:42.323989 sshd-session[1610]: pam_unix(sshd:session): session closed for user core
May 15 23:58:42.337577 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:43862.service: Deactivated successfully.
May 15 23:58:42.339304 systemd[1]: session-4.scope: Deactivated successfully.
May 15 23:58:42.340988 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit.
May 15 23:58:42.358237 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:43872.service - OpenSSH per-connection server daemon (10.0.0.1:43872).
May 15 23:58:42.359653 systemd-logind[1469]: Removed session 4.
May 15 23:58:42.398077 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 43872 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:42.400019 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:42.405594 systemd-logind[1469]: New session 5 of user core.
May 15 23:58:42.415119 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 23:58:42.480482 sudo[1620]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 23:58:42.481002 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:58:42.500957 sudo[1620]: pam_unix(sudo:session): session closed for user root
May 15 23:58:42.503075 sshd[1619]: Connection closed by 10.0.0.1 port 43872
May 15 23:58:42.503670 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
May 15 23:58:42.518532 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:43872.service: Deactivated successfully.
May 15 23:58:42.520753 systemd[1]: session-5.scope: Deactivated successfully.
May 15 23:58:42.522641 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit.
May 15 23:58:42.532031 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:43882.service - OpenSSH per-connection server daemon (10.0.0.1:43882).
May 15 23:58:42.533292 systemd-logind[1469]: Removed session 5.
May 15 23:58:42.572624 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:42.574427 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:42.580680 systemd-logind[1469]: New session 6 of user core.
May 15 23:58:42.592022 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 23:58:42.653066 sudo[1629]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 23:58:42.653547 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:58:42.659231 sudo[1629]: pam_unix(sudo:session): session closed for user root
May 15 23:58:42.667313 sudo[1628]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 23:58:42.667812 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:58:42.696333 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:58:42.738280 augenrules[1651]: No rules
May 15 23:58:42.740338 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:58:42.740666 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:58:42.742565 sudo[1628]: pam_unix(sudo:session): session closed for user root
May 15 23:58:42.744677 sshd[1627]: Connection closed by 10.0.0.1 port 43882
May 15 23:58:42.745110 sshd-session[1625]: pam_unix(sshd:session): session closed for user core
May 15 23:58:42.756113 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:43882.service: Deactivated successfully.
May 15 23:58:42.757970 systemd[1]: session-6.scope: Deactivated successfully.
May 15 23:58:42.759793 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit.
May 15 23:58:42.770487 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:43886.service - OpenSSH per-connection server daemon (10.0.0.1:43886).
May 15 23:58:42.772180 systemd-logind[1469]: Removed session 6.
May 15 23:58:42.814117 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 43886 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k
May 15 23:58:42.816609 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:58:42.822107 systemd-logind[1469]: New session 7 of user core.
May 15 23:58:42.832085 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 23:58:42.891264 sudo[1662]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 23:58:42.891626 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:58:43.517839 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 23:58:43.517840 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 23:58:44.462110 dockerd[1683]: time="2025-05-15T23:58:44.461985024Z" level=info msg="Starting up"
May 15 23:58:45.361768 dockerd[1683]: time="2025-05-15T23:58:45.361578454Z" level=info msg="Loading containers: start."
May 15 23:58:45.628588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 23:58:45.638396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:58:45.894693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:58:45.902622 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:58:46.284042 kubelet[1775]: E0515 23:58:46.283836    1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:58:46.290933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:58:46.291209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:58:47.510931 kernel: Initializing XFRM netlink socket
May 15 23:58:47.720823 systemd-networkd[1423]: docker0: Link UP
May 15 23:58:47.816511 dockerd[1683]: time="2025-05-15T23:58:47.815756100Z" level=info msg="Loading containers: done."
May 15 23:58:47.907728 dockerd[1683]: time="2025-05-15T23:58:47.907605814Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 23:58:47.907973 dockerd[1683]: time="2025-05-15T23:58:47.907801741Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 15 23:58:47.908012 dockerd[1683]: time="2025-05-15T23:58:47.907987109Z" level=info msg="Daemon has completed initialization"
May 15 23:58:48.100763 dockerd[1683]: time="2025-05-15T23:58:48.098637455Z" level=info msg="API listen on /run/docker.sock"
May 15 23:58:48.099529 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 23:58:50.423167 containerd[1480]: time="2025-05-15T23:58:50.423039308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 15 23:58:51.267882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868690529.mount: Deactivated successfully.
May 15 23:58:53.754643 containerd[1480]: time="2025-05-15T23:58:53.754460783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:53.755679 containerd[1480]: time="2025-05-15T23:58:53.755623053Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 15 23:58:53.761043 containerd[1480]: time="2025-05-15T23:58:53.760867009Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:53.766243 containerd[1480]: time="2025-05-15T23:58:53.766062674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:53.768032 containerd[1480]: time="2025-05-15T23:58:53.767506292Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 3.344402152s"
May 15 23:58:53.768032 containerd[1480]: time="2025-05-15T23:58:53.767557208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 15 23:58:53.768618 containerd[1480]: time="2025-05-15T23:58:53.768555901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 15 23:58:55.480021 containerd[1480]: time="2025-05-15T23:58:55.479931192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:55.481117 containerd[1480]: time="2025-05-15T23:58:55.480959040Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 15 23:58:55.482738 containerd[1480]: time="2025-05-15T23:58:55.482680308Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:55.486228 containerd[1480]: time="2025-05-15T23:58:55.486153473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:55.489733 containerd[1480]: time="2025-05-15T23:58:55.488622675Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.720026458s"
May 15 23:58:55.489733 containerd[1480]: time="2025-05-15T23:58:55.488673931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 15 23:58:55.490213 containerd[1480]: time="2025-05-15T23:58:55.490137175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 15 23:58:56.536657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 23:58:56.554950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:58:56.805448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:58:56.812472 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:58:57.037771 kubelet[1962]: E0515 23:58:57.036778    1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:58:57.044474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:58:57.044960 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:58:57.991934 containerd[1480]: time="2025-05-15T23:58:57.990365890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:58.007333 containerd[1480]: time="2025-05-15T23:58:58.005040135Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 15 23:58:58.079032 containerd[1480]: time="2025-05-15T23:58:58.078398083Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:58.560728 containerd[1480]: time="2025-05-15T23:58:58.560625955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:58:58.563280 containerd[1480]: time="2025-05-15T23:58:58.562136478Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 3.071941725s"
May 15 23:58:58.563280 containerd[1480]: time="2025-05-15T23:58:58.562177144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 15 23:58:58.563280 containerd[1480]: time="2025-05-15T23:58:58.562768964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 15 23:59:00.505135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946875746.mount: Deactivated successfully.
May 15 23:59:02.270192 containerd[1480]: time="2025-05-15T23:59:02.270089819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:02.272387 containerd[1480]: time="2025-05-15T23:59:02.272332575Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 15 23:59:02.274619 containerd[1480]: time="2025-05-15T23:59:02.274559352Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:02.277478 containerd[1480]: time="2025-05-15T23:59:02.277409198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:02.278223 containerd[1480]: time="2025-05-15T23:59:02.278132054Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 3.715333123s"
May 15 23:59:02.278223 containerd[1480]: time="2025-05-15T23:59:02.278177920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 15 23:59:02.278911 containerd[1480]: time="2025-05-15T23:59:02.278757136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 23:59:03.397201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344446941.mount: Deactivated successfully.
May 15 23:59:05.667690 containerd[1480]: time="2025-05-15T23:59:05.667519365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:05.670063 containerd[1480]: time="2025-05-15T23:59:05.669295922Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 15 23:59:05.673129 containerd[1480]: time="2025-05-15T23:59:05.673004175Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:05.679103 containerd[1480]: time="2025-05-15T23:59:05.679017568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:05.680744 containerd[1480]: time="2025-05-15T23:59:05.680630027Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.401828603s"
May 15 23:59:05.680744 containerd[1480]: time="2025-05-15T23:59:05.680722763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 15 23:59:05.681542 containerd[1480]: time="2025-05-15T23:59:05.681472948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 23:59:06.477303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293721806.mount: Deactivated successfully.
May 15 23:59:06.526449 containerd[1480]: time="2025-05-15T23:59:06.526354113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:06.527733 containerd[1480]: time="2025-05-15T23:59:06.527658285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 15 23:59:06.529964 containerd[1480]: time="2025-05-15T23:59:06.529872676Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:06.533162 containerd[1480]: time="2025-05-15T23:59:06.533037417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:06.533997 containerd[1480]: time="2025-05-15T23:59:06.533935263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 852.410946ms"
May 15 23:59:06.533997 containerd[1480]: time="2025-05-15T23:59:06.533984687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 23:59:06.534550 containerd[1480]: time="2025-05-15T23:59:06.534503314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 15 23:59:07.286647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 23:59:07.297046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:59:07.479415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:59:07.486021 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:59:07.540957 kubelet[2045]: E0515 23:59:07.540664 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:59:07.546201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:59:07.546473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:59:08.506190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315128139.mount: Deactivated successfully.
May 15 23:59:11.310092 containerd[1480]: time="2025-05-15T23:59:11.310023222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:11.310811 containerd[1480]: time="2025-05-15T23:59:11.310772114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 15 23:59:11.312060 containerd[1480]: time="2025-05-15T23:59:11.312022360Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:11.315760 containerd[1480]: time="2025-05-15T23:59:11.315699652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:59:11.316872 containerd[1480]: time="2025-05-15T23:59:11.316812518Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.782265866s"
May 15 23:59:11.316872 containerd[1480]: time="2025-05-15T23:59:11.316860843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 23:59:13.442258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:59:13.457167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:59:13.486239 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit session-7.scope)...
May 15 23:59:13.486262 systemd[1]: Reloading...
May 15 23:59:13.579753 zram_generator::config[2178]: No configuration found.
May 15 23:59:14.184989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:59:14.296539 systemd[1]: Reloading finished in 809 ms.
May 15 23:59:14.376602 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 23:59:14.376700 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 23:59:14.377067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:59:14.380818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:59:14.584484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:59:14.591065 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:59:14.655379 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:59:14.655379 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 23:59:14.655379 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:59:14.655888 kubelet[2224]: I0515 23:59:14.655438 2224 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:59:15.219094 kubelet[2224]: I0515 23:59:15.219025 2224 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 15 23:59:15.219094 kubelet[2224]: I0515 23:59:15.219070 2224 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:59:15.219387 kubelet[2224]: I0515 23:59:15.219356 2224 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 23:59:15.259383 kubelet[2224]: E0515 23:59:15.259327 2224 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:15.262098 kubelet[2224]: I0515 23:59:15.262059 2224 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:59:15.273862 kubelet[2224]: E0515 23:59:15.273805 2224 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:59:15.273862 kubelet[2224]: I0515 23:59:15.273848 2224 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:59:15.280036 kubelet[2224]: I0515 23:59:15.279979 2224 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:59:15.282009 kubelet[2224]: I0515 23:59:15.281938 2224 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:59:15.282161 kubelet[2224]: I0515 23:59:15.281984 2224 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 23:59:15.282161 kubelet[2224]: I0515 23:59:15.282161 2224 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:59:15.282390 kubelet[2224]: I0515 23:59:15.282172 2224 container_manager_linux.go:304] "Creating device plugin manager"
May 15 23:59:15.282390 kubelet[2224]: I0515 23:59:15.282321 2224 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:59:15.285835 kubelet[2224]: I0515 23:59:15.285785 2224 kubelet.go:446] "Attempting to sync node with API server"
May 15 23:59:15.285835 kubelet[2224]: I0515 23:59:15.285817 2224 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:59:15.285835 kubelet[2224]: I0515 23:59:15.285837 2224 kubelet.go:352] "Adding apiserver pod source"
May 15 23:59:15.285835 kubelet[2224]: I0515 23:59:15.285850 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:59:15.307219 kubelet[2224]: I0515 23:59:15.307101 2224 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:59:15.307585 kubelet[2224]: I0515 23:59:15.307550 2224 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:59:15.308937 kubelet[2224]: W0515 23:59:15.308687 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 23:59:15.308937 kubelet[2224]: W0515 23:59:15.308754 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
May 15 23:59:15.308937 kubelet[2224]: E0515 23:59:15.308834 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:15.308937 kubelet[2224]: W0515 23:59:15.308890 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
May 15 23:59:15.308937 kubelet[2224]: E0515 23:59:15.308914 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:15.311377 kubelet[2224]: I0515 23:59:15.311307 2224 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 23:59:15.311377 kubelet[2224]: I0515 23:59:15.311354 2224 server.go:1287] "Started kubelet"
May 15 23:59:15.312632 kubelet[2224]: I0515 23:59:15.312588 2224 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:59:15.313790 kubelet[2224]: I0515 23:59:15.313761 2224 server.go:479] "Adding debug handlers to kubelet server"
May 15 23:59:15.314952 kubelet[2224]: I0515 23:59:15.314908 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:59:15.316891 kubelet[2224]: I0515 23:59:15.315450 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:59:15.316891 kubelet[2224]: I0515 23:59:15.315762 2224 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:59:15.316891 kubelet[2224]: I0515 23:59:15.316277 2224 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:59:15.317286 kubelet[2224]: E0515 23:59:15.317260 2224 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:59:15.317325 kubelet[2224]: I0515 23:59:15.317290 2224 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 23:59:15.318531 kubelet[2224]: I0515 23:59:15.318497 2224 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 15 23:59:15.318581 kubelet[2224]: I0515 23:59:15.318557 2224 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:59:15.319758 kubelet[2224]: W0515 23:59:15.319577 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
May 15 23:59:15.319758 kubelet[2224]: E0515 23:59:15.319649 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:15.319970 kubelet[2224]: E0515 23:59:15.319921 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms"
May 15 23:59:15.322080 kubelet[2224]: E0515 23:59:15.320865 2224 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:59:15.322080 kubelet[2224]: I0515 23:59:15.321010 2224 factory.go:221] Registration of the systemd container factory successfully
May 15 23:59:15.322080 kubelet[2224]: I0515 23:59:15.321093 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:59:15.326404 kubelet[2224]: E0515 23:59:15.322363 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd8ccf9590160 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:59:15.311329632 +0000 UTC m=+0.709108607,LastTimestamp:2025-05-15 23:59:15.311329632 +0000 UTC m=+0.709108607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 23:59:15.327991 kubelet[2224]: I0515 23:59:15.326737 2224 factory.go:221] Registration of the containerd container factory successfully
May 15 23:59:15.353053 kubelet[2224]: I0515 23:59:15.352916 2224 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 23:59:15.353053 kubelet[2224]: I0515 23:59:15.352952 2224 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 23:59:15.353053 kubelet[2224]: I0515 23:59:15.352978 2224 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:59:15.355747 kubelet[2224]: I0515 23:59:15.355664 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:59:15.358304 kubelet[2224]: I0515 23:59:15.358171 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 23:59:15.358304 kubelet[2224]: I0515 23:59:15.358225 2224 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 23:59:15.358304 kubelet[2224]: I0515 23:59:15.358257 2224 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 23:59:15.358304 kubelet[2224]: I0515 23:59:15.358267 2224 kubelet.go:2382] "Starting kubelet main sync loop"
May 15 23:59:15.358444 kubelet[2224]: E0515 23:59:15.358361 2224 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:59:15.359161 kubelet[2224]: W0515 23:59:15.359046 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused
May 15 23:59:15.359161 kubelet[2224]: E0515 23:59:15.359116 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:15.359771 kubelet[2224]: I0515 23:59:15.359433 2224 policy_none.go:49] "None policy: Start"
May 15 23:59:15.359771 kubelet[2224]: I0515 23:59:15.359458 2224 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 23:59:15.359771 kubelet[2224]: I0515 23:59:15.359473 2224 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:59:15.368396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 23:59:15.390756 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 23:59:15.394614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 23:59:15.405532 kubelet[2224]: I0515 23:59:15.405413 2224 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 23:59:15.406682 kubelet[2224]: I0515 23:59:15.405789 2224 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:59:15.406682 kubelet[2224]: I0515 23:59:15.405809 2224 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:59:15.406682 kubelet[2224]: I0515 23:59:15.406320 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:59:15.407290 kubelet[2224]: E0515 23:59:15.407248 2224 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 23:59:15.407360 kubelet[2224]: E0515 23:59:15.407328 2224 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 23:59:15.475531 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice.
May 15 23:59:15.489682 kubelet[2224]: E0515 23:59:15.489610 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:59:15.493590 systemd[1]: Created slice kubepods-burstable-pod1788fb037218d783b217e4cf7b71e88a.slice - libcontainer container kubepods-burstable-pod1788fb037218d783b217e4cf7b71e88a.slice.
May 15 23:59:15.506819 kubelet[2224]: E0515 23:59:15.506727 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:59:15.507226 kubelet[2224]: I0515 23:59:15.507170 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:59:15.507644 kubelet[2224]: E0515 23:59:15.507604 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
May 15 23:59:15.510406 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice.
May 15 23:59:15.512967 kubelet[2224]: E0515 23:59:15.512890 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:59:15.520549 kubelet[2224]: I0515 23:59:15.520457 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:59:15.520549 kubelet[2224]: I0515 23:59:15.520523 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:59:15.520549 kubelet[2224]: I0515 23:59:15.520550 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:59:15.520852 kubelet[2224]: I0515 23:59:15.520574 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:59:15.520852 kubelet[2224]: I0515 23:59:15.520600 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:59:15.520852 kubelet[2224]: I0515 23:59:15.520627 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 15 23:59:15.520852 kubelet[2224]: I0515 23:59:15.520648 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:59:15.520852 kubelet[2224]: I0515 23:59:15.520670 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:59:15.521097 kubelet[2224]: I0515 23:59:15.520697 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:59:15.521097 kubelet[2224]: E0515 23:59:15.521010 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms"
May 15 23:59:15.709537 kubelet[2224]: I0515 23:59:15.709497 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:59:15.710069 kubelet[2224]: E0515 23:59:15.709888 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
May 15 23:59:15.791342 kubelet[2224]: E0515 23:59:15.791262 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:59:15.792237 containerd[1480]: time="2025-05-15T23:59:15.792162589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}"
May 15 23:59:15.807688 kubelet[2224]: E0515 23:59:15.807564 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:15.808331 containerd[1480]: time="2025-05-15T23:59:15.808284851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1788fb037218d783b217e4cf7b71e88a,Namespace:kube-system,Attempt:0,}" May 15 23:59:15.813947 kubelet[2224]: E0515 23:59:15.813884 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:15.814530 containerd[1480]: time="2025-05-15T23:59:15.814474537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 15 23:59:15.922862 kubelet[2224]: E0515 23:59:15.922800 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms" May 15 23:59:16.111335 kubelet[2224]: I0515 23:59:16.111196 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:16.111584 kubelet[2224]: E0515 23:59:16.111542 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" May 15 23:59:16.144204 kubelet[2224]: W0515 23:59:16.144076 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused May 15 23:59:16.144204 kubelet[2224]: E0515 23:59:16.144134 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:16.272920 kubelet[2224]: W0515 23:59:16.272826 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused May 15 23:59:16.272920 kubelet[2224]: E0515 23:59:16.272920 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:16.358578 kubelet[2224]: W0515 23:59:16.358524 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused May 15 23:59:16.358578 kubelet[2224]: E0515 23:59:16.358577 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:16.634612 kubelet[2224]: W0515 23:59:16.634546 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused May 15 23:59:16.634612 kubelet[2224]: E0515 23:59:16.634610 2224 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:16.723904 kubelet[2224]: E0515 23:59:16.723831 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s"
May 15 23:59:16.913969 kubelet[2224]: I0515 23:59:16.913773 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:59:16.914294 kubelet[2224]: E0515 23:59:16.914250 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
May 15 23:59:17.073155 update_engine[1470]: I20250515 23:59:17.073015 1470 update_attempter.cc:509] Updating boot flags...
May 15 23:59:17.109760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2265)
May 15 23:59:17.196929 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2265)
May 15 23:59:17.253778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2265)
May 15 23:59:17.436068 kubelet[2224]: E0515 23:59:17.435946 2224 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError"
May 15 23:59:17.563740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058498209.mount: Deactivated successfully.
May 15 23:59:17.573239 containerd[1480]: time="2025-05-15T23:59:17.573194363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:59:17.576578 containerd[1480]: time="2025-05-15T23:59:17.576549694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 15 23:59:17.577949 containerd[1480]: time="2025-05-15T23:59:17.577896668Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:59:17.578964 containerd[1480]: time="2025-05-15T23:59:17.578936685Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:59:17.582272
containerd[1480]: time="2025-05-15T23:59:17.582232952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:59:17.583855 containerd[1480]: time="2025-05-15T23:59:17.583818957Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:59:17.585160 containerd[1480]: time="2025-05-15T23:59:17.585113554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:59:17.586027 containerd[1480]: time="2025-05-15T23:59:17.585986209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.793668466s" May 15 23:59:17.587200 containerd[1480]: time="2025-05-15T23:59:17.587132233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:59:17.589834 containerd[1480]: time="2025-05-15T23:59:17.589798510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.781419645s" May 15 23:59:17.594308 containerd[1480]: time="2025-05-15T23:59:17.594265708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.779665987s" May 15 23:59:17.713116 containerd[1480]: time="2025-05-15T23:59:17.713017622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:17.713116 containerd[1480]: time="2025-05-15T23:59:17.713079320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:17.713116 containerd[1480]: time="2025-05-15T23:59:17.713098280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:17.713325 containerd[1480]: time="2025-05-15T23:59:17.713197969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:17.717657 containerd[1480]: time="2025-05-15T23:59:17.716555874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:17.717657 containerd[1480]: time="2025-05-15T23:59:17.716595877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:17.717657 containerd[1480]: time="2025-05-15T23:59:17.716606198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:17.717657 containerd[1480]: time="2025-05-15T23:59:17.716674002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:17.721179 containerd[1480]: time="2025-05-15T23:59:17.721047218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:59:17.721688 containerd[1480]: time="2025-05-15T23:59:17.721663772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:59:17.721798 containerd[1480]: time="2025-05-15T23:59:17.721775064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:59:17.722055 containerd[1480]: time="2025-05-15T23:59:17.722021454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:59:17.742888 systemd[1]: Started cri-containerd-12a3f14b5e706f26b8609a7188d2ff57f3c1abd4c90a9567f11fb10ee33fefb5.scope - libcontainer container 12a3f14b5e706f26b8609a7188d2ff57f3c1abd4c90a9567f11fb10ee33fefb5.
May 15 23:59:17.745389 systemd[1]: Started cri-containerd-fdc5e245abcb16cec375eab703f4ebe29c95f0994f249f1fcefb3dd7e2b47f61.scope - libcontainer container fdc5e245abcb16cec375eab703f4ebe29c95f0994f249f1fcefb3dd7e2b47f61.
May 15 23:59:17.749329 systemd[1]: Started cri-containerd-48ac15992d915a91c4141b4b8b35d5cccde0b8f7f42c6cfc67fb9cf25e84529c.scope - libcontainer container 48ac15992d915a91c4141b4b8b35d5cccde0b8f7f42c6cfc67fb9cf25e84529c.
May 15 23:59:17.794265 containerd[1480]: time="2025-05-15T23:59:17.794207496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"12a3f14b5e706f26b8609a7188d2ff57f3c1abd4c90a9567f11fb10ee33fefb5\"" May 15 23:59:17.795725 kubelet[2224]: E0515 23:59:17.795299 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:17.798235 containerd[1480]: time="2025-05-15T23:59:17.798172415Z" level=info msg="CreateContainer within sandbox \"12a3f14b5e706f26b8609a7188d2ff57f3c1abd4c90a9567f11fb10ee33fefb5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:59:17.811795 containerd[1480]: time="2025-05-15T23:59:17.811700998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1788fb037218d783b217e4cf7b71e88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc5e245abcb16cec375eab703f4ebe29c95f0994f249f1fcefb3dd7e2b47f61\"" May 15 23:59:17.812222 containerd[1480]: time="2025-05-15T23:59:17.811869382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"48ac15992d915a91c4141b4b8b35d5cccde0b8f7f42c6cfc67fb9cf25e84529c\"" May 15 23:59:17.812856 kubelet[2224]: E0515 23:59:17.812830 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:17.812856 kubelet[2224]: E0515 23:59:17.812844 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:17.816098 containerd[1480]: 
time="2025-05-15T23:59:17.814902190Z" level=info msg="CreateContainer within sandbox \"fdc5e245abcb16cec375eab703f4ebe29c95f0994f249f1fcefb3dd7e2b47f61\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:59:17.816098 containerd[1480]: time="2025-05-15T23:59:17.814953236Z" level=info msg="CreateContainer within sandbox \"48ac15992d915a91c4141b4b8b35d5cccde0b8f7f42c6cfc67fb9cf25e84529c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:59:17.835693 containerd[1480]: time="2025-05-15T23:59:17.835618287Z" level=info msg="CreateContainer within sandbox \"12a3f14b5e706f26b8609a7188d2ff57f3c1abd4c90a9567f11fb10ee33fefb5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\"" May 15 23:59:17.836524 containerd[1480]: time="2025-05-15T23:59:17.836480470Z" level=info msg="StartContainer for \"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\"" May 15 23:59:17.851682 containerd[1480]: time="2025-05-15T23:59:17.851621787Z" level=info msg="CreateContainer within sandbox \"fdc5e245abcb16cec375eab703f4ebe29c95f0994f249f1fcefb3dd7e2b47f61\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\"" May 15 23:59:17.852611 containerd[1480]: time="2025-05-15T23:59:17.852567491Z" level=info msg="CreateContainer within sandbox \"48ac15992d915a91c4141b4b8b35d5cccde0b8f7f42c6cfc67fb9cf25e84529c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\"" May 15 23:59:17.853249 containerd[1480]: time="2025-05-15T23:59:17.853212868Z" level=info msg="StartContainer for \"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\"" May 15 23:59:17.853249 containerd[1480]: time="2025-05-15T23:59:17.853237675Z" level=info msg="StartContainer for 
\"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\"" May 15 23:59:17.872417 systemd[1]: Started cri-containerd-08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00.scope - libcontainer container 08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00. May 15 23:59:17.904960 systemd[1]: Started cri-containerd-0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853.scope - libcontainer container 0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853. May 15 23:59:17.906610 systemd[1]: Started cri-containerd-c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986.scope - libcontainer container c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986. May 15 23:59:17.948700 containerd[1480]: time="2025-05-15T23:59:17.948595792Z" level=info msg="StartContainer for \"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" returns successfully" May 15 23:59:17.965517 containerd[1480]: time="2025-05-15T23:59:17.965302382Z" level=info msg="StartContainer for \"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\" returns successfully" May 15 23:59:17.965517 containerd[1480]: time="2025-05-15T23:59:17.965437720Z" level=info msg="StartContainer for \"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" returns successfully" May 15 23:59:18.371626 kubelet[2224]: E0515 23:59:18.371568 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:18.371819 kubelet[2224]: E0515 23:59:18.371778 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:18.372000 kubelet[2224]: E0515 23:59:18.371971 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" May 15 23:59:18.372122 kubelet[2224]: E0515 23:59:18.372094 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:18.374004 kubelet[2224]: E0515 23:59:18.373975 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:18.374128 kubelet[2224]: E0515 23:59:18.374102 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:18.516769 kubelet[2224]: I0515 23:59:18.516693 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:19.215175 kubelet[2224]: E0515 23:59:19.214864 2224 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:59:19.306664 kubelet[2224]: I0515 23:59:19.306586 2224 apiserver.go:52] "Watching apiserver" May 15 23:59:19.319118 kubelet[2224]: I0515 23:59:19.319047 2224 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:59:19.346737 kubelet[2224]: E0515 23:59:19.346579 2224 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fd8ccf9590160 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:59:15.311329632 +0000 UTC m=+0.709108607,LastTimestamp:2025-05-15 23:59:15.311329632 +0000 UTC m=+0.709108607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:59:19.376657 kubelet[2224]: E0515 23:59:19.376426 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:19.376657 kubelet[2224]: E0515 23:59:19.376492 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:19.376657 kubelet[2224]: E0515 23:59:19.376594 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:19.376657 kubelet[2224]: E0515 23:59:19.376629 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:19.404492 kubelet[2224]: I0515 23:59:19.404427 2224 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:59:19.404492 kubelet[2224]: E0515 23:59:19.404508 2224 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:59:19.419946 kubelet[2224]: I0515 23:59:19.419904 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:19.426511 kubelet[2224]: E0515 23:59:19.426213 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:59:19.426511 kubelet[2224]: I0515 23:59:19.426270 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:19.428296 kubelet[2224]: E0515 23:59:19.428258 
2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 23:59:19.428296 kubelet[2224]: I0515 23:59:19.428296 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:19.429680 kubelet[2224]: E0515 23:59:19.429646 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:20.376643 kubelet[2224]: I0515 23:59:20.376561 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:20.378229 kubelet[2224]: I0515 23:59:20.378198 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:20.462037 kubelet[2224]: E0515 23:59:20.461988 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.518045 kubelet[2224]: E0515 23:59:20.517994 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:21.379852 kubelet[2224]: E0515 23:59:21.379797 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:21.383605 kubelet[2224]: E0515 23:59:21.383516 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:22.064027 systemd[1]: Reloading 
requested from client PID 2517 ('systemctl') (unit session-7.scope)... May 15 23:59:22.064045 systemd[1]: Reloading... May 15 23:59:22.159778 zram_generator::config[2557]: No configuration found. May 15 23:59:22.316206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:59:22.441521 systemd[1]: Reloading finished in 376 ms. May 15 23:59:22.499536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:22.522443 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:59:22.522889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:22.522954 systemd[1]: kubelet.service: Consumed 1.609s CPU time, 138.8M memory peak, 0B memory swap peak. May 15 23:59:22.535446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:22.741050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:22.747200 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:59:22.816904 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:59:22.816904 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:59:22.816904 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 23:59:22.817452 kubelet[2600]: I0515 23:59:22.816964 2600 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:59:22.829340 kubelet[2600]: I0515 23:59:22.829265 2600 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 15 23:59:22.829340 kubelet[2600]: I0515 23:59:22.829312 2600 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:59:22.829746 kubelet[2600]: I0515 23:59:22.829665 2600 server.go:954] "Client rotation is on, will bootstrap in background" May 15 23:59:22.831557 kubelet[2600]: I0515 23:59:22.831183 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 23:59:22.834360 kubelet[2600]: I0515 23:59:22.834299 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:59:22.837976 kubelet[2600]: E0515 23:59:22.837933 2600 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 23:59:22.837976 kubelet[2600]: I0515 23:59:22.837967 2600 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 23:59:22.844074 kubelet[2600]: I0515 23:59:22.843996 2600 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:59:22.844329 kubelet[2600]: I0515 23:59:22.844289 2600 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:59:22.844554 kubelet[2600]: I0515 23:59:22.844328 2600 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:59:22.844668 kubelet[2600]: I0515 23:59:22.844575 2600 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 23:59:22.844668 kubelet[2600]: I0515 23:59:22.844589 2600 container_manager_linux.go:304] "Creating device plugin manager" May 15 23:59:22.844668 kubelet[2600]: I0515 23:59:22.844652 2600 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:22.844892 kubelet[2600]: I0515 23:59:22.844875 2600 kubelet.go:446] "Attempting to sync node with API server" May 15 23:59:22.844938 kubelet[2600]: I0515 23:59:22.844904 2600 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:59:22.844938 kubelet[2600]: I0515 23:59:22.844932 2600 kubelet.go:352] "Adding apiserver pod source" May 15 23:59:22.844996 kubelet[2600]: I0515 23:59:22.844944 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:59:22.848382 kubelet[2600]: I0515 23:59:22.848182 2600 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 23:59:22.851263 kubelet[2600]: I0515 23:59:22.848666 2600 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:59:22.851263 kubelet[2600]: I0515 23:59:22.849252 2600 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 23:59:22.851263 kubelet[2600]: I0515 23:59:22.849293 2600 server.go:1287] "Started kubelet" May 15 23:59:22.854730 kubelet[2600]: I0515 23:59:22.852091 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:59:22.854730 kubelet[2600]: I0515 23:59:22.853013 2600 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:59:22.854730 kubelet[2600]: I0515 23:59:22.853074 2600 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:59:22.854885 kubelet[2600]: I0515 23:59:22.854774 2600 server.go:479] "Adding debug handlers to kubelet server" May 15 23:59:22.858721 kubelet[2600]: I0515 23:59:22.858409 2600 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:59:22.866256 kubelet[2600]: E0515 23:59:22.865341 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:22.866256 kubelet[2600]: I0515 23:59:22.866193 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:59:22.868533 kubelet[2600]: I0515 23:59:22.868401 2600 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 23:59:22.868896 kubelet[2600]: I0515 23:59:22.868869 2600 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 15 23:59:22.869062 kubelet[2600]: I0515 23:59:22.869038 2600 reconciler.go:26] "Reconciler: start to sync state" May 15 23:59:22.871685 kubelet[2600]: I0515 23:59:22.871472 2600 factory.go:221] Registration of the systemd container factory successfully May 15 23:59:22.871685 kubelet[2600]: I0515 23:59:22.871590 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:59:22.876073 kubelet[2600]: E0515 23:59:22.876016 2600 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:59:22.880606 kubelet[2600]: I0515 23:59:22.880079 2600 factory.go:221] Registration of the containerd container factory successfully May 15 23:59:22.891084 kubelet[2600]: I0515 23:59:22.891022 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:59:22.893296 kubelet[2600]: I0515 23:59:22.892755 2600 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:59:22.893296 kubelet[2600]: I0515 23:59:22.892803 2600 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 23:59:22.893296 kubelet[2600]: I0515 23:59:22.892833 2600 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 23:59:22.893296 kubelet[2600]: I0515 23:59:22.892844 2600 kubelet.go:2382] "Starting kubelet main sync loop" May 15 23:59:22.893296 kubelet[2600]: E0515 23:59:22.892936 2600 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:59:22.936136 kubelet[2600]: I0515 23:59:22.936082 2600 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 23:59:22.936443 kubelet[2600]: I0515 23:59:22.936396 2600 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 23:59:22.936443 kubelet[2600]: I0515 23:59:22.936436 2600 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:22.936748 kubelet[2600]: I0515 23:59:22.936698 2600 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 23:59:22.936809 kubelet[2600]: I0515 23:59:22.936742 2600 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 23:59:22.936809 kubelet[2600]: I0515 23:59:22.936770 2600 policy_none.go:49] "None policy: Start" May 15 23:59:22.936809 kubelet[2600]: I0515 23:59:22.936784 2600 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 23:59:22.936809 kubelet[2600]: I0515 23:59:22.936798 2600 state_mem.go:35] "Initializing new in-memory state store" May 15 23:59:22.936989 kubelet[2600]: I0515 23:59:22.936965 2600 state_mem.go:75] "Updated machine memory state" May 15 23:59:22.944597 kubelet[2600]: I0515 23:59:22.944213 2600 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:59:22.944844 kubelet[2600]: I0515 
23:59:22.944626 2600 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:59:22.944844 kubelet[2600]: I0515 23:59:22.944662 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:59:22.945031 kubelet[2600]: I0515 23:59:22.944994 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:59:22.947065 kubelet[2600]: E0515 23:59:22.947035 2600 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 23:59:22.994568 kubelet[2600]: I0515 23:59:22.994377 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:22.994568 kubelet[2600]: I0515 23:59:22.994415 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:22.994568 kubelet[2600]: I0515 23:59:22.994446 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.050313 kubelet[2600]: I0515 23:59:23.050256 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:23.070549 kubelet[2600]: I0515 23:59:23.070457 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.070549 kubelet[2600]: I0515 23:59:23.070520 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.070549 kubelet[2600]: I0515 23:59:23.070552 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.070814 kubelet[2600]: I0515 23:59:23.070578 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:23.070814 kubelet[2600]: I0515 23:59:23.070608 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:23.070814 kubelet[2600]: I0515 23:59:23.070633 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.070814 kubelet[2600]: I0515 23:59:23.070656 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.070814 kubelet[2600]: I0515 23:59:23.070680 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 15 23:59:23.070946 kubelet[2600]: I0515 23:59:23.070719 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1788fb037218d783b217e4cf7b71e88a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1788fb037218d783b217e4cf7b71e88a\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:23.199169 kubelet[2600]: E0515 23:59:23.199112 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 23:59:23.199419 kubelet[2600]: E0515 23:59:23.199330 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:23.199579 kubelet[2600]: E0515 23:59:23.199549 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.199694 kubelet[2600]: E0515 23:59:23.199676 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:23.326292 kubelet[2600]: I0515 23:59:23.326234 2600 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 15 23:59:23.326484 kubelet[2600]: I0515 23:59:23.326372 2600 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:59:23.418603 kubelet[2600]: E0515 23:59:23.418544 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:23.480257 sudo[2638]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 23:59:23.480839 sudo[2638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 23:59:23.846291 kubelet[2600]: I0515 23:59:23.846218 2600 apiserver.go:52] "Watching apiserver" May 15 23:59:23.869853 kubelet[2600]: I0515 23:59:23.869789 2600 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:59:23.910229 kubelet[2600]: I0515 23:59:23.909944 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:23.910229 kubelet[2600]: I0515 23:59:23.910044 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:23.910229 kubelet[2600]: I0515 23:59:23.910122 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:24.034390 sudo[2638]: pam_unix(sudo:session): session closed for user root May 15 23:59:24.134569 kubelet[2600]: E0515 23:59:24.134410 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 23:59:24.135416 kubelet[2600]: E0515 23:59:24.135002 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.136096 kubelet[2600]: E0515 23:59:24.135514 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:59:24.136096 kubelet[2600]: E0515 23:59:24.135688 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:24.136184 kubelet[2600]: E0515 23:59:24.136161 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.136355 kubelet[2600]: E0515 23:59:24.136330 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.137643 kubelet[2600]: I0515 23:59:24.137577 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.137524898 podStartE2EDuration="2.137524898s" podCreationTimestamp="2025-05-15 23:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:24.134521709 +0000 UTC m=+1.382356341" watchObservedRunningTime="2025-05-15 23:59:24.137524898 +0000 UTC m=+1.385359550" May 15 23:59:24.372820 kubelet[2600]: I0515 23:59:24.371842 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.371806782 podStartE2EDuration="4.371806782s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:24.370373176 +0000 UTC m=+1.618207808" watchObservedRunningTime="2025-05-15 23:59:24.371806782 +0000 UTC m=+1.619641424" May 15 23:59:24.542535 kubelet[2600]: I0515 23:59:24.542127 2600 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.542107434 podStartE2EDuration="4.542107434s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:24.541946081 +0000 UTC m=+1.789780723" watchObservedRunningTime="2025-05-15 23:59:24.542107434 +0000 UTC m=+1.789942066" May 15 23:59:24.912042 kubelet[2600]: E0515 23:59:24.911730 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.912042 kubelet[2600]: E0515 23:59:24.911804 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.912502 kubelet[2600]: E0515 23:59:24.912162 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:25.913541 kubelet[2600]: E0515 23:59:25.913508 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:25.914026 kubelet[2600]: E0515 23:59:25.913884 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:26.059374 sudo[1662]: pam_unix(sudo:session): session closed for user root May 15 23:59:26.061667 sshd[1661]: Connection closed by 10.0.0.1 port 43886 May 15 23:59:26.072427 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 15 23:59:26.077511 systemd[1]: 
sshd@6-10.0.0.123:22-10.0.0.1:43886.service: Deactivated successfully. May 15 23:59:26.079473 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:59:26.079695 systemd[1]: session-7.scope: Consumed 5.383s CPU time, 152.6M memory peak, 0B memory swap peak. May 15 23:59:26.080337 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. May 15 23:59:26.081588 systemd-logind[1469]: Removed session 7. May 15 23:59:26.293203 kubelet[2600]: E0515 23:59:26.293144 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:27.074988 kubelet[2600]: I0515 23:59:27.074938 2600 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 23:59:27.075422 containerd[1480]: time="2025-05-15T23:59:27.075266097Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 23:59:27.075686 kubelet[2600]: I0515 23:59:27.075459 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 23:59:27.724974 systemd[1]: Created slice kubepods-burstable-pode80e2ac5_c9cf_4248_8d9f_6f28e35f34b2.slice - libcontainer container kubepods-burstable-pode80e2ac5_c9cf_4248_8d9f_6f28e35f34b2.slice. 
May 15 23:59:27.808610 kubelet[2600]: I0515 23:59:27.808320 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-host-proc-sys-kernel\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.808610 kubelet[2600]: I0515 23:59:27.808413 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-hubble-tls\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.808610 kubelet[2600]: I0515 23:59:27.808459 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-etc-cni-netd\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809109 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-clustermesh-secrets\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809158 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-cilium-config-path\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809189 2600 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-host-proc-sys-net\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809215 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kbm\" (UniqueName: \"kubernetes.io/projected/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-kube-api-access-79kbm\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809253 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-bpf-maps\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811329 kubelet[2600]: I0515 23:59:27.809283 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-cni-path\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811632 kubelet[2600]: I0515 23:59:27.809312 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-cilium-cgroup\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811632 kubelet[2600]: I0515 23:59:27.809339 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-lib-modules\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811632 kubelet[2600]: I0515 23:59:27.809361 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-xtables-lock\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811632 kubelet[2600]: I0515 23:59:27.809390 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-cilium-run\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:27.811632 kubelet[2600]: I0515 23:59:27.809417 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2-hostproc\") pod \"cilium-k4b8r\" (UID: \"e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2\") " pod="kube-system/cilium-k4b8r" May 15 23:59:28.033620 systemd[1]: Created slice kubepods-besteffort-pod45be1e59_cf2b_495e_b9fa_b2bf550e6acc.slice - libcontainer container kubepods-besteffort-pod45be1e59_cf2b_495e_b9fa_b2bf550e6acc.slice. 
May 15 23:59:28.111918 kubelet[2600]: I0515 23:59:28.111836 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45be1e59-cf2b-495e-b9fa-b2bf550e6acc-kube-proxy\") pod \"kube-proxy-lzd7x\" (UID: \"45be1e59-cf2b-495e-b9fa-b2bf550e6acc\") " pod="kube-system/kube-proxy-lzd7x" May 15 23:59:28.111918 kubelet[2600]: I0515 23:59:28.111895 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45be1e59-cf2b-495e-b9fa-b2bf550e6acc-xtables-lock\") pod \"kube-proxy-lzd7x\" (UID: \"45be1e59-cf2b-495e-b9fa-b2bf550e6acc\") " pod="kube-system/kube-proxy-lzd7x" May 15 23:59:28.111918 kubelet[2600]: I0515 23:59:28.111920 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzg7\" (UniqueName: \"kubernetes.io/projected/45be1e59-cf2b-495e-b9fa-b2bf550e6acc-kube-api-access-cjzg7\") pod \"kube-proxy-lzd7x\" (UID: \"45be1e59-cf2b-495e-b9fa-b2bf550e6acc\") " pod="kube-system/kube-proxy-lzd7x" May 15 23:59:28.112530 kubelet[2600]: I0515 23:59:28.111965 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45be1e59-cf2b-495e-b9fa-b2bf550e6acc-lib-modules\") pod \"kube-proxy-lzd7x\" (UID: \"45be1e59-cf2b-495e-b9fa-b2bf550e6acc\") " pod="kube-system/kube-proxy-lzd7x" May 15 23:59:28.280180 systemd[1]: Created slice kubepods-besteffort-podc48ca35d_8e25_4e65_b32b_1042890d985f.slice - libcontainer container kubepods-besteffort-podc48ca35d_8e25_4e65_b32b_1042890d985f.slice. 
May 15 23:59:28.312836 kubelet[2600]: I0515 23:59:28.312776 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcdz\" (UniqueName: \"kubernetes.io/projected/c48ca35d-8e25-4e65-b32b-1042890d985f-kube-api-access-brcdz\") pod \"cilium-operator-6c4d7847fc-wx8kc\" (UID: \"c48ca35d-8e25-4e65-b32b-1042890d985f\") " pod="kube-system/cilium-operator-6c4d7847fc-wx8kc" May 15 23:59:28.312836 kubelet[2600]: I0515 23:59:28.312826 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c48ca35d-8e25-4e65-b32b-1042890d985f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wx8kc\" (UID: \"c48ca35d-8e25-4e65-b32b-1042890d985f\") " pod="kube-system/cilium-operator-6c4d7847fc-wx8kc" May 15 23:59:28.331347 kubelet[2600]: E0515 23:59:28.331284 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:28.332269 containerd[1480]: time="2025-05-15T23:59:28.332181452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4b8r,Uid:e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2,Namespace:kube-system,Attempt:0,}" May 15 23:59:28.340859 kubelet[2600]: E0515 23:59:28.340813 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:28.341426 containerd[1480]: time="2025-05-15T23:59:28.341373392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzd7x,Uid:45be1e59-cf2b-495e-b9fa-b2bf550e6acc,Namespace:kube-system,Attempt:0,}" May 15 23:59:28.378568 containerd[1480]: time="2025-05-15T23:59:28.378439570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:28.378700 containerd[1480]: time="2025-05-15T23:59:28.378572387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:28.378700 containerd[1480]: time="2025-05-15T23:59:28.378612247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.378829 containerd[1480]: time="2025-05-15T23:59:28.378779577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.397687 containerd[1480]: time="2025-05-15T23:59:28.397360071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:28.398233 containerd[1480]: time="2025-05-15T23:59:28.397470146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:28.398233 containerd[1480]: time="2025-05-15T23:59:28.397490416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.398233 containerd[1480]: time="2025-05-15T23:59:28.397652398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.403979 systemd[1]: Started cri-containerd-91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54.scope - libcontainer container 91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54. May 15 23:59:28.430037 systemd[1]: Started cri-containerd-20a9e573831652ac72ff37061945fd7151aeabdf657cfada30c360e95b2ffe96.scope - libcontainer container 20a9e573831652ac72ff37061945fd7151aeabdf657cfada30c360e95b2ffe96. 
May 15 23:59:28.446404 containerd[1480]: time="2025-05-15T23:59:28.446009447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4b8r,Uid:e80e2ac5-c9cf-4248-8d9f-6f28e35f34b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\"" May 15 23:59:28.447362 kubelet[2600]: E0515 23:59:28.447304 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:28.449485 containerd[1480]: time="2025-05-15T23:59:28.449376709Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:59:28.459519 containerd[1480]: time="2025-05-15T23:59:28.459467860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzd7x,Uid:45be1e59-cf2b-495e-b9fa-b2bf550e6acc,Namespace:kube-system,Attempt:0,} returns sandbox id \"20a9e573831652ac72ff37061945fd7151aeabdf657cfada30c360e95b2ffe96\"" May 15 23:59:28.460348 kubelet[2600]: E0515 23:59:28.460137 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:28.462149 containerd[1480]: time="2025-05-15T23:59:28.462123321Z" level=info msg="CreateContainer within sandbox \"20a9e573831652ac72ff37061945fd7151aeabdf657cfada30c360e95b2ffe96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:59:28.583412 kubelet[2600]: E0515 23:59:28.583251 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:28.583816 containerd[1480]: time="2025-05-15T23:59:28.583776348Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wx8kc,Uid:c48ca35d-8e25-4e65-b32b-1042890d985f,Namespace:kube-system,Attempt:0,}" May 15 23:59:28.901339 containerd[1480]: time="2025-05-15T23:59:28.901209449Z" level=info msg="CreateContainer within sandbox \"20a9e573831652ac72ff37061945fd7151aeabdf657cfada30c360e95b2ffe96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3135ddb7c43b0af699abce77dbe95622a94ffcc795242e81c40777826ca0c86\"" May 15 23:59:28.902765 containerd[1480]: time="2025-05-15T23:59:28.901812359Z" level=info msg="StartContainer for \"b3135ddb7c43b0af699abce77dbe95622a94ffcc795242e81c40777826ca0c86\"" May 15 23:59:28.919205 containerd[1480]: time="2025-05-15T23:59:28.918858855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:59:28.919205 containerd[1480]: time="2025-05-15T23:59:28.918923341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:59:28.919205 containerd[1480]: time="2025-05-15T23:59:28.918937152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.919205 containerd[1480]: time="2025-05-15T23:59:28.919061608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:59:28.957962 systemd[1]: Started cri-containerd-b3135ddb7c43b0af699abce77dbe95622a94ffcc795242e81c40777826ca0c86.scope - libcontainer container b3135ddb7c43b0af699abce77dbe95622a94ffcc795242e81c40777826ca0c86. May 15 23:59:28.960143 systemd[1]: Started cri-containerd-b6d778582fd8e215e07f0b63566f0522852b0bd159fcef8cb98b46001d5b4051.scope - libcontainer container b6d778582fd8e215e07f0b63566f0522852b0bd159fcef8cb98b46001d5b4051. 
May 15 23:59:29.009079 containerd[1480]: time="2025-05-15T23:59:29.009009812Z" level=info msg="StartContainer for \"b3135ddb7c43b0af699abce77dbe95622a94ffcc795242e81c40777826ca0c86\" returns successfully" May 15 23:59:29.009243 containerd[1480]: time="2025-05-15T23:59:29.009081942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wx8kc,Uid:c48ca35d-8e25-4e65-b32b-1042890d985f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6d778582fd8e215e07f0b63566f0522852b0bd159fcef8cb98b46001d5b4051\"" May 15 23:59:29.011294 kubelet[2600]: E0515 23:59:29.010959 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:29.943817 kubelet[2600]: E0515 23:59:29.943444 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:30.951206 kubelet[2600]: E0515 23:59:30.950863 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:33.326282 kubelet[2600]: E0515 23:59:33.326236 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:33.343765 kubelet[2600]: I0515 23:59:33.343671 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzd7x" podStartSLOduration=6.343651976 podStartE2EDuration="6.343651976s" podCreationTimestamp="2025-05-15 23:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:29.972740892 +0000 UTC m=+7.220575554" watchObservedRunningTime="2025-05-15 
23:59:33.343651976 +0000 UTC m=+10.591486628" May 15 23:59:33.956504 kubelet[2600]: E0515 23:59:33.956458 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:34.959807 kubelet[2600]: E0515 23:59:34.959364 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:35.317267 kubelet[2600]: E0515 23:59:35.316185 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:35.561662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525058353.mount: Deactivated successfully. May 15 23:59:36.152882 kubelet[2600]: E0515 23:59:36.152798 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:36.299386 kubelet[2600]: E0515 23:59:36.298305 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:40.989699 containerd[1480]: time="2025-05-15T23:59:40.989604670Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:40.999257 containerd[1480]: time="2025-05-15T23:59:40.999183802Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 23:59:41.017730 containerd[1480]: time="2025-05-15T23:59:41.017649597Z" level=info msg="ImageCreate event 
name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:41.020220 containerd[1480]: time="2025-05-15T23:59:41.020176528Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.570761069s" May 15 23:59:41.020220 containerd[1480]: time="2025-05-15T23:59:41.020217838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 23:59:41.029811 containerd[1480]: time="2025-05-15T23:59:41.029739859Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:59:41.031163 containerd[1480]: time="2025-05-15T23:59:41.031100335Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:59:41.077737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183096197.mount: Deactivated successfully. 
May 15 23:59:41.083154 containerd[1480]: time="2025-05-15T23:59:41.083106313Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2\"" May 15 23:59:41.083684 containerd[1480]: time="2025-05-15T23:59:41.083657354Z" level=info msg="StartContainer for \"5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2\"" May 15 23:59:41.120019 systemd[1]: Started cri-containerd-5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2.scope - libcontainer container 5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2. May 15 23:59:41.159147 containerd[1480]: time="2025-05-15T23:59:41.158954664Z" level=info msg="StartContainer for \"5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2\" returns successfully" May 15 23:59:41.168336 kubelet[2600]: E0515 23:59:41.168283 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:41.175655 systemd[1]: cri-containerd-5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2.scope: Deactivated successfully. May 15 23:59:42.075020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2-rootfs.mount: Deactivated successfully. 
May 15 23:59:42.170580 kubelet[2600]: E0515 23:59:42.170512 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:42.780761 containerd[1480]: time="2025-05-15T23:59:42.780664742Z" level=info msg="shim disconnected" id=5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2 namespace=k8s.io May 15 23:59:42.780761 containerd[1480]: time="2025-05-15T23:59:42.780756621Z" level=warning msg="cleaning up after shim disconnected" id=5685eb3c0ef10c25157f457acdca0203e5b0e29a3f23c32e919e5cdb76b563a2 namespace=k8s.io May 15 23:59:42.780761 containerd[1480]: time="2025-05-15T23:59:42.780768962Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:43.175964 kubelet[2600]: E0515 23:59:43.175892 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:43.187046 containerd[1480]: time="2025-05-15T23:59:43.186957128Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:59:43.228106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053949803.mount: Deactivated successfully. 
May 15 23:59:43.231491 containerd[1480]: time="2025-05-15T23:59:43.231220207Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567\"" May 15 23:59:43.234538 containerd[1480]: time="2025-05-15T23:59:43.232264614Z" level=info msg="StartContainer for \"8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567\"" May 15 23:59:43.275151 systemd[1]: Started cri-containerd-8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567.scope - libcontainer container 8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567. May 15 23:59:43.322100 containerd[1480]: time="2025-05-15T23:59:43.322033140Z" level=info msg="StartContainer for \"8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567\" returns successfully" May 15 23:59:43.339577 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:59:43.339947 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:59:43.340068 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:59:43.347664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:59:43.348339 systemd[1]: cri-containerd-8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567.scope: Deactivated successfully. May 15 23:59:43.375806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567-rootfs.mount: Deactivated successfully. 
May 15 23:59:43.389515 containerd[1480]: time="2025-05-15T23:59:43.389405415Z" level=info msg="shim disconnected" id=8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567 namespace=k8s.io May 15 23:59:43.389515 containerd[1480]: time="2025-05-15T23:59:43.389489920Z" level=warning msg="cleaning up after shim disconnected" id=8573b563dae4230b8a05ab6c5db2b2b69c4cc3f795f33b65f99eea59c5d29567 namespace=k8s.io May 15 23:59:43.389515 containerd[1480]: time="2025-05-15T23:59:43.389502072Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:43.390028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:59:44.179016 kubelet[2600]: E0515 23:59:44.178961 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:44.181003 containerd[1480]: time="2025-05-15T23:59:44.180936432Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:59:45.342437 containerd[1480]: time="2025-05-15T23:59:45.342360365Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23\"" May 15 23:59:45.343161 containerd[1480]: time="2025-05-15T23:59:45.343099046Z" level=info msg="StartContainer for \"1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23\"" May 15 23:59:45.381116 systemd[1]: Started cri-containerd-1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23.scope - libcontainer container 1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23. 
May 15 23:59:45.419701 systemd[1]: cri-containerd-1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23.scope: Deactivated successfully. May 15 23:59:45.586135 containerd[1480]: time="2025-05-15T23:59:45.586016709Z" level=info msg="StartContainer for \"1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23\" returns successfully" May 15 23:59:45.608903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23-rootfs.mount: Deactivated successfully. May 15 23:59:46.192248 kubelet[2600]: E0515 23:59:46.192177 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:46.641915 containerd[1480]: time="2025-05-15T23:59:46.641833685Z" level=info msg="shim disconnected" id=1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23 namespace=k8s.io May 15 23:59:46.641915 containerd[1480]: time="2025-05-15T23:59:46.641898128Z" level=warning msg="cleaning up after shim disconnected" id=1c6dbd53a0c0b07c7f064a41bfa5a179648c7b7f7a5a07f3631e85d7b156ed23 namespace=k8s.io May 15 23:59:46.641915 containerd[1480]: time="2025-05-15T23:59:46.641910069Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:47.196793 kubelet[2600]: E0515 23:59:47.196736 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:47.201777 containerd[1480]: time="2025-05-15T23:59:47.201695139Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:59:48.002035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86680730.mount: Deactivated successfully. 
May 15 23:59:48.036868 containerd[1480]: time="2025-05-15T23:59:48.036804720Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:48.130421 containerd[1480]: time="2025-05-15T23:59:48.130309582Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 23:59:48.416095 containerd[1480]: time="2025-05-15T23:59:48.416003555Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:48.417778 containerd[1480]: time="2025-05-15T23:59:48.417702381Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.387913487s" May 15 23:59:48.417778 containerd[1480]: time="2025-05-15T23:59:48.417767467Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 23:59:48.420096 containerd[1480]: time="2025-05-15T23:59:48.420066637Z" level=info msg="CreateContainer within sandbox \"b6d778582fd8e215e07f0b63566f0522852b0bd159fcef8cb98b46001d5b4051\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:59:48.422820 containerd[1480]: time="2025-05-15T23:59:48.422789096Z" level=info msg="CreateContainer within sandbox 
\"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5\"" May 15 23:59:48.423225 containerd[1480]: time="2025-05-15T23:59:48.423200994Z" level=info msg="StartContainer for \"1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5\"" May 15 23:59:48.459938 systemd[1]: Started cri-containerd-1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5.scope - libcontainer container 1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5. May 15 23:59:48.487672 systemd[1]: cri-containerd-1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5.scope: Deactivated successfully. May 15 23:59:48.568139 containerd[1480]: time="2025-05-15T23:59:48.568023403Z" level=info msg="StartContainer for \"1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5\" returns successfully" May 15 23:59:48.591089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5-rootfs.mount: Deactivated successfully. 
May 15 23:59:48.786953 containerd[1480]: time="2025-05-15T23:59:48.786746572Z" level=info msg="shim disconnected" id=1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5 namespace=k8s.io May 15 23:59:48.786953 containerd[1480]: time="2025-05-15T23:59:48.786833706Z" level=warning msg="cleaning up after shim disconnected" id=1a53ca5911afe75ba341254aec6bdf4eb3e89c7b49c4b8e9eca7c1c3b10917d5 namespace=k8s.io May 15 23:59:48.786953 containerd[1480]: time="2025-05-15T23:59:48.786855585Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:59:49.201911 kubelet[2600]: E0515 23:59:49.201642 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:49.203341 containerd[1480]: time="2025-05-15T23:59:49.203299174Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:59:49.295872 containerd[1480]: time="2025-05-15T23:59:49.295788077Z" level=info msg="CreateContainer within sandbox \"b6d778582fd8e215e07f0b63566f0522852b0bd159fcef8cb98b46001d5b4051\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\"" May 15 23:59:49.296498 containerd[1480]: time="2025-05-15T23:59:49.296451266Z" level=info msg="StartContainer for \"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\"" May 15 23:59:49.338462 systemd[1]: Started cri-containerd-8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040.scope - libcontainer container 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040. 
May 15 23:59:49.432096 containerd[1480]: time="2025-05-15T23:59:49.432025910Z" level=info msg="StartContainer for \"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" returns successfully" May 15 23:59:49.487284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782322922.mount: Deactivated successfully. May 15 23:59:49.692255 containerd[1480]: time="2025-05-15T23:59:49.692147115Z" level=info msg="CreateContainer within sandbox \"91fca2fdbc44a8e28a9209fb83e5b83570f7064c4c4ea72ed00e183fee44fb54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b\"" May 15 23:59:49.692843 containerd[1480]: time="2025-05-15T23:59:49.692818938Z" level=info msg="StartContainer for \"d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b\"" May 15 23:59:49.722854 systemd[1]: Started cri-containerd-d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b.scope - libcontainer container d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b. May 15 23:59:50.009891 containerd[1480]: time="2025-05-15T23:59:50.009827661Z" level=info msg="StartContainer for \"d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b\" returns successfully" May 15 23:59:50.030933 systemd[1]: run-containerd-runc-k8s.io-d30f78ddd708f0863c404a4ae63e8183ac4c0e2cf09d9c0d5d5afa77c532812b-runc.1qPFT5.mount: Deactivated successfully. 
May 15 23:59:50.153980 kubelet[2600]: I0515 23:59:50.153943 2600 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 15 23:59:50.207466 kubelet[2600]: E0515 23:59:50.207236 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:50.209659 kubelet[2600]: E0515 23:59:50.209627 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:50.792752 kubelet[2600]: I0515 23:59:50.791182 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k4b8r" podStartSLOduration=11.210497912 podStartE2EDuration="23.791155233s" podCreationTimestamp="2025-05-15 23:59:27 +0000 UTC" firstStartedPulling="2025-05-15 23:59:28.448870184 +0000 UTC m=+5.696704816" lastFinishedPulling="2025-05-15 23:59:41.029527505 +0000 UTC m=+18.277362137" observedRunningTime="2025-05-15 23:59:50.790573227 +0000 UTC m=+28.038407859" watchObservedRunningTime="2025-05-15 23:59:50.791155233 +0000 UTC m=+28.038989865" May 15 23:59:51.157388 systemd[1]: Created slice kubepods-burstable-pod502efef5_98bf_4ba0_bd67_d56d7acfda7f.slice - libcontainer container kubepods-burstable-pod502efef5_98bf_4ba0_bd67_d56d7acfda7f.slice. May 15 23:59:51.164359 systemd[1]: Created slice kubepods-burstable-pod9354b0d9_a90e_4197_98e6_f03d0e1c38f6.slice - libcontainer container kubepods-burstable-pod9354b0d9_a90e_4197_98e6_f03d0e1c38f6.slice. 
May 15 23:59:51.183054 kubelet[2600]: I0515 23:59:51.182974 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5p7\" (UniqueName: \"kubernetes.io/projected/502efef5-98bf-4ba0-bd67-d56d7acfda7f-kube-api-access-zz5p7\") pod \"coredns-668d6bf9bc-wbhml\" (UID: \"502efef5-98bf-4ba0-bd67-d56d7acfda7f\") " pod="kube-system/coredns-668d6bf9bc-wbhml" May 15 23:59:51.183054 kubelet[2600]: I0515 23:59:51.183035 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9354b0d9-a90e-4197-98e6-f03d0e1c38f6-config-volume\") pod \"coredns-668d6bf9bc-cwg2r\" (UID: \"9354b0d9-a90e-4197-98e6-f03d0e1c38f6\") " pod="kube-system/coredns-668d6bf9bc-cwg2r" May 15 23:59:51.183054 kubelet[2600]: I0515 23:59:51.183053 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2qdf\" (UniqueName: \"kubernetes.io/projected/9354b0d9-a90e-4197-98e6-f03d0e1c38f6-kube-api-access-m2qdf\") pod \"coredns-668d6bf9bc-cwg2r\" (UID: \"9354b0d9-a90e-4197-98e6-f03d0e1c38f6\") " pod="kube-system/coredns-668d6bf9bc-cwg2r" May 15 23:59:51.183054 kubelet[2600]: I0515 23:59:51.183067 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/502efef5-98bf-4ba0-bd67-d56d7acfda7f-config-volume\") pod \"coredns-668d6bf9bc-wbhml\" (UID: \"502efef5-98bf-4ba0-bd67-d56d7acfda7f\") " pod="kube-system/coredns-668d6bf9bc-wbhml" May 15 23:59:51.216964 kubelet[2600]: E0515 23:59:51.216924 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:51.218222 kubelet[2600]: E0515 23:59:51.218194 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:52.729871 kubelet[2600]: I0515 23:59:52.725873 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wx8kc" podStartSLOduration=5.318929017 podStartE2EDuration="24.725852492s" podCreationTimestamp="2025-05-15 23:59:28 +0000 UTC" firstStartedPulling="2025-05-15 23:59:29.01151571 +0000 UTC m=+6.259350343" lastFinishedPulling="2025-05-15 23:59:48.418439186 +0000 UTC m=+25.666273818" observedRunningTime="2025-05-15 23:59:51.17386299 +0000 UTC m=+28.421697622" watchObservedRunningTime="2025-05-15 23:59:52.725852492 +0000 UTC m=+29.973687124" May 15 23:59:52.961949 kubelet[2600]: E0515 23:59:52.961874 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:52.962693 containerd[1480]: time="2025-05-15T23:59:52.962634858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbhml,Uid:502efef5-98bf-4ba0-bd67-d56d7acfda7f,Namespace:kube-system,Attempt:0,}" May 15 23:59:52.967062 kubelet[2600]: E0515 23:59:52.967025 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:52.967613 containerd[1480]: time="2025-05-15T23:59:52.967522367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwg2r,Uid:9354b0d9-a90e-4197-98e6-f03d0e1c38f6,Namespace:kube-system,Attempt:0,}" May 15 23:59:53.665266 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:44698.service - OpenSSH per-connection server daemon (10.0.0.1:44698). 
May 15 23:59:53.720969 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 44698 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:53.722737 sshd-session[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:53.727638 systemd-logind[1469]: New session 8 of user core. May 15 23:59:53.733847 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 23:59:54.179097 sshd[3411]: Connection closed by 10.0.0.1 port 44698 May 15 23:59:54.179474 sshd-session[3409]: pam_unix(sshd:session): session closed for user core May 15 23:59:54.183552 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:44698.service: Deactivated successfully. May 15 23:59:54.185657 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:59:54.186285 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. May 15 23:59:54.187252 systemd-logind[1469]: Removed session 8. May 15 23:59:54.480214 systemd-networkd[1423]: cilium_host: Link UP May 15 23:59:54.481090 systemd-networkd[1423]: cilium_net: Link UP May 15 23:59:54.481276 systemd-networkd[1423]: cilium_net: Gained carrier May 15 23:59:54.481501 systemd-networkd[1423]: cilium_host: Gained carrier May 15 23:59:54.618967 systemd-networkd[1423]: cilium_vxlan: Link UP May 15 23:59:54.618979 systemd-networkd[1423]: cilium_vxlan: Gained carrier May 15 23:59:54.659870 systemd-networkd[1423]: cilium_host: Gained IPv6LL May 15 23:59:54.869762 kernel: NET: Registered PF_ALG protocol family May 15 23:59:55.449466 systemd-networkd[1423]: cilium_net: Gained IPv6LL May 15 23:59:55.668243 systemd-networkd[1423]: lxc_health: Link UP May 15 23:59:55.677735 systemd-networkd[1423]: lxc_health: Gained carrier May 15 23:59:55.706349 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL May 15 23:59:55.778228 systemd-networkd[1423]: lxce4a9627b3b8b: Link UP May 15 23:59:55.792353 kernel: eth0: renamed from tmp66f90 May 15 23:59:55.803684 systemd-networkd[1423]: 
lxce4a9627b3b8b: Gained carrier May 15 23:59:55.808252 systemd-networkd[1423]: lxc90c758f2119c: Link UP May 15 23:59:55.820837 kernel: eth0: renamed from tmpef782 May 15 23:59:55.827221 systemd-networkd[1423]: lxc90c758f2119c: Gained carrier May 15 23:59:56.333363 kubelet[2600]: E0515 23:59:56.333310 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:57.236195 kubelet[2600]: E0515 23:59:57.236154 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:57.435502 systemd-networkd[1423]: lxce4a9627b3b8b: Gained IPv6LL May 15 23:59:57.689109 systemd-networkd[1423]: lxc_health: Gained IPv6LL May 15 23:59:57.689572 systemd-networkd[1423]: lxc90c758f2119c: Gained IPv6LL May 15 23:59:58.245332 kubelet[2600]: E0515 23:59:58.244255 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:59.200824 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:49578.service - OpenSSH per-connection server daemon (10.0.0.1:49578). May 15 23:59:59.278233 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 49578 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 15 23:59:59.286631 sshd-session[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:59.301950 systemd-logind[1469]: New session 9 of user core. May 15 23:59:59.317359 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 23:59:59.529748 sshd[3836]: Connection closed by 10.0.0.1 port 49578 May 15 23:59:59.532171 sshd-session[3834]: pam_unix(sshd:session): session closed for user core May 15 23:59:59.537220 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:49578.service: Deactivated successfully. May 15 23:59:59.539594 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:59:59.544666 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. May 15 23:59:59.545871 systemd-logind[1469]: Removed session 9. May 16 00:00:00.754750 containerd[1480]: time="2025-05-16T00:00:00.754497403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:00:00.754750 containerd[1480]: time="2025-05-16T00:00:00.754625607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:00:00.754750 containerd[1480]: time="2025-05-16T00:00:00.754640134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:00:00.755413 containerd[1480]: time="2025-05-16T00:00:00.754786020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:00:00.784133 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 16 00:00:00.812955 systemd[1]: Started cri-containerd-ef782d367192d9609eb1bc08d023b3afcf307c0e09ce33961f4a26092c40d0a4.scope - libcontainer container ef782d367192d9609eb1bc08d023b3afcf307c0e09ce33961f4a26092c40d0a4. May 16 00:00:00.827665 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:00:00.839872 containerd[1480]: time="2025-05-16T00:00:00.839753728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:00:00.839872 containerd[1480]: time="2025-05-16T00:00:00.839817144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:00:00.839872 containerd[1480]: time="2025-05-16T00:00:00.839831661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:00:00.840094 containerd[1480]: time="2025-05-16T00:00:00.839924210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:00:00.872049 systemd[1]: Started cri-containerd-66f90ced2089a02ed9a4bac8940280516a37d262baaf9db160061586ef9da973.scope - libcontainer container 66f90ced2089a02ed9a4bac8940280516a37d262baaf9db160061586ef9da973. May 16 00:00:00.873171 systemd[1]: logrotate.service: Deactivated successfully. May 16 00:00:00.892352 containerd[1480]: time="2025-05-16T00:00:00.892197044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwg2r,Uid:9354b0d9-a90e-4197-98e6-f03d0e1c38f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef782d367192d9609eb1bc08d023b3afcf307c0e09ce33961f4a26092c40d0a4\"" May 16 00:00:00.896387 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:00:00.897398 kubelet[2600]: E0516 00:00:00.897300 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:00.901687 containerd[1480]: time="2025-05-16T00:00:00.901626371Z" level=info msg="CreateContainer within sandbox \"ef782d367192d9609eb1bc08d023b3afcf307c0e09ce33961f4a26092c40d0a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:00:00.927849 containerd[1480]: 
time="2025-05-16T00:00:00.927803038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbhml,Uid:502efef5-98bf-4ba0-bd67-d56d7acfda7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"66f90ced2089a02ed9a4bac8940280516a37d262baaf9db160061586ef9da973\"" May 16 00:00:00.928811 kubelet[2600]: E0516 00:00:00.928650 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:00.930917 containerd[1480]: time="2025-05-16T00:00:00.930865903Z" level=info msg="CreateContainer within sandbox \"66f90ced2089a02ed9a4bac8940280516a37d262baaf9db160061586ef9da973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:00:01.758691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814506564.mount: Deactivated successfully. May 16 00:00:02.125109 containerd[1480]: time="2025-05-16T00:00:02.125035276Z" level=info msg="CreateContainer within sandbox \"ef782d367192d9609eb1bc08d023b3afcf307c0e09ce33961f4a26092c40d0a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62785481159ccab21517cc4113dae9664e886e6373f6ab2f31089131490cacde\"" May 16 00:00:02.125657 containerd[1480]: time="2025-05-16T00:00:02.125610030Z" level=info msg="StartContainer for \"62785481159ccab21517cc4113dae9664e886e6373f6ab2f31089131490cacde\"" May 16 00:00:02.158039 systemd[1]: Started cri-containerd-62785481159ccab21517cc4113dae9664e886e6373f6ab2f31089131490cacde.scope - libcontainer container 62785481159ccab21517cc4113dae9664e886e6373f6ab2f31089131490cacde. 
May 16 00:00:02.525890 containerd[1480]: time="2025-05-16T00:00:02.525655393Z" level=info msg="CreateContainer within sandbox \"66f90ced2089a02ed9a4bac8940280516a37d262baaf9db160061586ef9da973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18574b1de9bf1c1ed9ea8146bfb3b277771bd6f8ee32eb5c4cb9205f717cfce5\"" May 16 00:00:02.525890 containerd[1480]: time="2025-05-16T00:00:02.525694494Z" level=info msg="StartContainer for \"62785481159ccab21517cc4113dae9664e886e6373f6ab2f31089131490cacde\" returns successfully" May 16 00:00:02.526754 containerd[1480]: time="2025-05-16T00:00:02.526472580Z" level=info msg="StartContainer for \"18574b1de9bf1c1ed9ea8146bfb3b277771bd6f8ee32eb5c4cb9205f717cfce5\"" May 16 00:00:02.559985 systemd[1]: Started cri-containerd-18574b1de9bf1c1ed9ea8146bfb3b277771bd6f8ee32eb5c4cb9205f717cfce5.scope - libcontainer container 18574b1de9bf1c1ed9ea8146bfb3b277771bd6f8ee32eb5c4cb9205f717cfce5. May 16 00:00:02.681864 containerd[1480]: time="2025-05-16T00:00:02.681794594Z" level=info msg="StartContainer for \"18574b1de9bf1c1ed9ea8146bfb3b277771bd6f8ee32eb5c4cb9205f717cfce5\" returns successfully" May 16 00:00:03.267622 kubelet[2600]: E0516 00:00:03.267580 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:03.270820 kubelet[2600]: E0516 00:00:03.270785 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:03.429784 kubelet[2600]: I0516 00:00:03.429282 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cwg2r" podStartSLOduration=35.429267611 podStartE2EDuration="35.429267611s" podCreationTimestamp="2025-05-15 23:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:00:03.428889858 +0000 UTC m=+40.676724500" watchObservedRunningTime="2025-05-16 00:00:03.429267611 +0000 UTC m=+40.677102233" May 16 00:00:03.614837 kubelet[2600]: I0516 00:00:03.614747 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbhml" podStartSLOduration=35.614703225 podStartE2EDuration="35.614703225s" podCreationTimestamp="2025-05-15 23:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:00:03.614087325 +0000 UTC m=+40.861921967" watchObservedRunningTime="2025-05-16 00:00:03.614703225 +0000 UTC m=+40.862537858" May 16 00:00:04.273137 kubelet[2600]: E0516 00:00:04.272690 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:04.273137 kubelet[2600]: E0516 00:00:04.272977 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:04.545335 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:49588.service - OpenSSH per-connection server daemon (10.0.0.1:49588). May 16 00:00:04.599376 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:04.601570 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:04.606497 systemd-logind[1469]: New session 10 of user core. May 16 00:00:04.615382 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 16 00:00:04.820954 sshd[4024]: Connection closed by 10.0.0.1 port 49588 May 16 00:00:04.821287 sshd-session[4020]: pam_unix(sshd:session): session closed for user core May 16 00:00:04.824577 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:49588.service: Deactivated successfully. May 16 00:00:04.826527 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:00:04.828128 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. May 16 00:00:04.828934 systemd-logind[1469]: Removed session 10. May 16 00:00:05.274907 kubelet[2600]: E0516 00:00:05.274752 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:05.274907 kubelet[2600]: E0516 00:00:05.274826 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:06.280807 kubelet[2600]: E0516 00:00:06.280753 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:09.834162 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:42064.service - OpenSSH per-connection server daemon (10.0.0.1:42064). May 16 00:00:09.896521 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 42064 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:09.898467 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:09.904172 systemd-logind[1469]: New session 11 of user core. May 16 00:00:09.914026 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 16 00:00:10.063954 sshd[4043]: Connection closed by 10.0.0.1 port 42064 May 16 00:00:10.064319 sshd-session[4041]: pam_unix(sshd:session): session closed for user core May 16 00:00:10.068043 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:42064.service: Deactivated successfully. May 16 00:00:10.070065 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:00:10.070791 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. May 16 00:00:10.071744 systemd-logind[1469]: Removed session 11. May 16 00:00:15.077941 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:42066.service - OpenSSH per-connection server daemon (10.0.0.1:42066). May 16 00:00:15.125489 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 42066 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:15.127665 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:15.133479 systemd-logind[1469]: New session 12 of user core. May 16 00:00:15.139067 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 00:00:15.278839 sshd[4058]: Connection closed by 10.0.0.1 port 42066 May 16 00:00:15.279250 sshd-session[4056]: pam_unix(sshd:session): session closed for user core May 16 00:00:15.283826 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:42066.service: Deactivated successfully. May 16 00:00:15.286100 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:00:15.286921 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. May 16 00:00:15.288163 systemd-logind[1469]: Removed session 12. May 16 00:00:20.295848 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:36220.service - OpenSSH per-connection server daemon (10.0.0.1:36220). 
May 16 00:00:20.339350 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 36220 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:20.341568 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:20.346624 systemd-logind[1469]: New session 13 of user core. May 16 00:00:20.360902 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 00:00:20.621160 sshd[4074]: Connection closed by 10.0.0.1 port 36220 May 16 00:00:20.621397 sshd-session[4072]: pam_unix(sshd:session): session closed for user core May 16 00:00:20.625824 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:36220.service: Deactivated successfully. May 16 00:00:20.627573 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:00:20.628254 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. May 16 00:00:20.629148 systemd-logind[1469]: Removed session 13. May 16 00:00:25.634646 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:36234.service - OpenSSH per-connection server daemon (10.0.0.1:36234). May 16 00:00:25.678477 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:25.680308 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:25.685058 systemd-logind[1469]: New session 14 of user core. May 16 00:00:25.694875 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 00:00:26.032466 sshd[4091]: Connection closed by 10.0.0.1 port 36234 May 16 00:00:26.032818 sshd-session[4089]: pam_unix(sshd:session): session closed for user core May 16 00:00:26.038409 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:36234.service: Deactivated successfully. May 16 00:00:26.040438 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:00:26.041284 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. 
May 16 00:00:26.042338 systemd-logind[1469]: Removed session 14. May 16 00:00:31.048861 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:57890.service - OpenSSH per-connection server daemon (10.0.0.1:57890). May 16 00:00:31.115028 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 57890 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:31.119125 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:31.126123 systemd-logind[1469]: New session 15 of user core. May 16 00:00:31.136375 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:00:31.303625 sshd[4108]: Connection closed by 10.0.0.1 port 57890 May 16 00:00:31.303936 sshd-session[4106]: pam_unix(sshd:session): session closed for user core May 16 00:00:31.317527 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:57890.service: Deactivated successfully. May 16 00:00:31.319796 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:00:31.322062 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. May 16 00:00:31.330183 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:57892.service - OpenSSH per-connection server daemon (10.0.0.1:57892). May 16 00:00:31.331821 systemd-logind[1469]: Removed session 15. May 16 00:00:31.374521 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 57892 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:31.376634 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:31.382151 systemd-logind[1469]: New session 16 of user core. May 16 00:00:31.394053 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 16 00:00:31.624521 sshd[4123]: Connection closed by 10.0.0.1 port 57892 May 16 00:00:31.624889 sshd-session[4121]: pam_unix(sshd:session): session closed for user core May 16 00:00:31.634101 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:57892.service: Deactivated successfully. May 16 00:00:31.636352 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:00:31.638248 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. May 16 00:00:31.647098 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:57904.service - OpenSSH per-connection server daemon (10.0.0.1:57904). May 16 00:00:31.648179 systemd-logind[1469]: Removed session 16. May 16 00:00:31.685206 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 57904 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:31.687250 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:31.692323 systemd-logind[1469]: New session 17 of user core. May 16 00:00:31.698973 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 00:00:31.918879 sshd[4136]: Connection closed by 10.0.0.1 port 57904 May 16 00:00:31.919211 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 16 00:00:31.923078 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:57904.service: Deactivated successfully. May 16 00:00:31.926112 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:00:31.927816 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. May 16 00:00:31.930457 systemd-logind[1469]: Removed session 17. May 16 00:00:36.930326 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:57912.service - OpenSSH per-connection server daemon (10.0.0.1:57912). 
May 16 00:00:37.101640 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:37.102310 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 57912 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:37.106482 systemd-logind[1469]: New session 18 of user core. May 16 00:00:37.117067 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 00:00:37.517131 sshd[4150]: Connection closed by 10.0.0.1 port 57912 May 16 00:00:37.517539 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 16 00:00:37.521899 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:57912.service: Deactivated successfully. May 16 00:00:37.523775 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:00:37.524473 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. May 16 00:00:37.525414 systemd-logind[1469]: Removed session 18. May 16 00:00:37.894129 kubelet[2600]: E0516 00:00:37.894069 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:38.894210 kubelet[2600]: E0516 00:00:38.894164 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:38.900442 kubelet[2600]: E0516 00:00:38.900395 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:42.531420 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:41884.service - OpenSSH per-connection server daemon (10.0.0.1:41884). 
May 16 00:00:42.575145 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 41884 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:42.643836 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:42.649020 systemd-logind[1469]: New session 19 of user core. May 16 00:00:42.663005 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 00:00:42.981815 sshd[4165]: Connection closed by 10.0.0.1 port 41884 May 16 00:00:42.982271 sshd-session[4163]: pam_unix(sshd:session): session closed for user core May 16 00:00:42.987022 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:41884.service: Deactivated successfully. May 16 00:00:42.989223 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:00:42.990328 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. May 16 00:00:42.991797 systemd-logind[1469]: Removed session 19. May 16 00:00:47.994977 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314). May 16 00:00:48.036557 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:48.038564 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:48.043021 systemd-logind[1469]: New session 20 of user core. May 16 00:00:48.053097 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 00:00:48.178614 sshd[4179]: Connection closed by 10.0.0.1 port 52314 May 16 00:00:48.179128 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 16 00:00:48.190856 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:52314.service: Deactivated successfully. May 16 00:00:48.192943 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:00:48.195037 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. 
May 16 00:00:48.207165 systemd[1]: Started sshd@20-10.0.0.123:22-10.0.0.1:52330.service - OpenSSH per-connection server daemon (10.0.0.1:52330). May 16 00:00:48.208367 systemd-logind[1469]: Removed session 20. May 16 00:00:48.245158 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 52330 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:48.246888 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:48.251331 systemd-logind[1469]: New session 21 of user core. May 16 00:00:48.260885 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 00:00:48.693097 sshd[4193]: Connection closed by 10.0.0.1 port 52330 May 16 00:00:48.695225 sshd-session[4191]: pam_unix(sshd:session): session closed for user core May 16 00:00:48.704208 systemd[1]: sshd@20-10.0.0.123:22-10.0.0.1:52330.service: Deactivated successfully. May 16 00:00:48.707019 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:00:48.709392 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. May 16 00:00:48.716290 systemd[1]: Started sshd@21-10.0.0.123:22-10.0.0.1:52338.service - OpenSSH per-connection server daemon (10.0.0.1:52338). May 16 00:00:48.719200 systemd-logind[1469]: Removed session 21. May 16 00:00:48.773143 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 52338 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:48.775686 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:48.784371 systemd-logind[1469]: New session 22 of user core. May 16 00:00:48.789007 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 16 00:00:49.802601 sshd[4206]: Connection closed by 10.0.0.1 port 52338 May 16 00:00:49.803932 sshd-session[4204]: pam_unix(sshd:session): session closed for user core May 16 00:00:49.812696 systemd[1]: sshd@21-10.0.0.123:22-10.0.0.1:52338.service: Deactivated successfully. May 16 00:00:49.814978 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:00:49.815917 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. May 16 00:00:49.823205 systemd[1]: Started sshd@22-10.0.0.123:22-10.0.0.1:52346.service - OpenSSH per-connection server daemon (10.0.0.1:52346). May 16 00:00:49.824869 systemd-logind[1469]: Removed session 22. May 16 00:00:49.867523 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 52346 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:49.869545 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:49.876126 systemd-logind[1469]: New session 23 of user core. May 16 00:00:49.886064 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 00:00:50.590238 sshd[4226]: Connection closed by 10.0.0.1 port 52346 May 16 00:00:50.590805 sshd-session[4224]: pam_unix(sshd:session): session closed for user core May 16 00:00:50.603091 systemd[1]: sshd@22-10.0.0.123:22-10.0.0.1:52346.service: Deactivated successfully. May 16 00:00:50.605295 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:00:50.607222 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. May 16 00:00:50.614402 systemd[1]: Started sshd@23-10.0.0.123:22-10.0.0.1:52360.service - OpenSSH per-connection server daemon (10.0.0.1:52360). May 16 00:00:50.615786 systemd-logind[1469]: Removed session 23. 
May 16 00:00:50.652467 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 52360 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:50.654597 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:50.659760 systemd-logind[1469]: New session 24 of user core. May 16 00:00:50.666962 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 00:00:50.822605 sshd[4238]: Connection closed by 10.0.0.1 port 52360 May 16 00:00:50.823104 sshd-session[4236]: pam_unix(sshd:session): session closed for user core May 16 00:00:50.827292 systemd[1]: sshd@23-10.0.0.123:22-10.0.0.1:52360.service: Deactivated successfully. May 16 00:00:50.829886 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:00:50.830602 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. May 16 00:00:50.831620 systemd-logind[1469]: Removed session 24. May 16 00:00:54.893937 kubelet[2600]: E0516 00:00:54.893875 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:55.838957 systemd[1]: Started sshd@24-10.0.0.123:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). May 16 00:00:55.879528 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:00:55.881172 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:55.885402 systemd-logind[1469]: New session 25 of user core. May 16 00:00:55.897852 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 16 00:00:56.037526 sshd[4253]: Connection closed by 10.0.0.1 port 52370 May 16 00:00:56.038059 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 16 00:00:56.042829 systemd[1]: sshd@24-10.0.0.123:22-10.0.0.1:52370.service: Deactivated successfully. May 16 00:00:56.045185 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:00:56.045878 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit. May 16 00:00:56.047204 systemd-logind[1469]: Removed session 25. May 16 00:00:58.894456 kubelet[2600]: E0516 00:00:58.894409 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:01.052060 systemd[1]: Started sshd@25-10.0.0.123:22-10.0.0.1:37588.service - OpenSSH per-connection server daemon (10.0.0.1:37588). May 16 00:01:01.101839 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:01:01.103516 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:01:01.108760 systemd-logind[1469]: New session 26 of user core. May 16 00:01:01.119053 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 00:01:01.250099 sshd[4270]: Connection closed by 10.0.0.1 port 37588 May 16 00:01:01.250553 sshd-session[4268]: pam_unix(sshd:session): session closed for user core May 16 00:01:01.256203 systemd[1]: sshd@25-10.0.0.123:22-10.0.0.1:37588.service: Deactivated successfully. May 16 00:01:01.258820 systemd[1]: session-26.scope: Deactivated successfully. May 16 00:01:01.260065 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit. May 16 00:01:01.261446 systemd-logind[1469]: Removed session 26. 
May 16 00:01:06.262155 systemd[1]: Started sshd@26-10.0.0.123:22-10.0.0.1:37604.service - OpenSSH per-connection server daemon (10.0.0.1:37604). May 16 00:01:06.318492 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 37604 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:01:06.320222 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:01:06.324518 systemd-logind[1469]: New session 27 of user core. May 16 00:01:06.332851 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 00:01:06.692837 sshd[4287]: Connection closed by 10.0.0.1 port 37604 May 16 00:01:06.693291 sshd-session[4285]: pam_unix(sshd:session): session closed for user core May 16 00:01:06.697248 systemd[1]: sshd@26-10.0.0.123:22-10.0.0.1:37604.service: Deactivated successfully. May 16 00:01:06.699320 systemd[1]: session-27.scope: Deactivated successfully. May 16 00:01:06.700103 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit. May 16 00:01:06.701151 systemd-logind[1469]: Removed session 27. May 16 00:01:06.896450 kubelet[2600]: E0516 00:01:06.896411 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:11.491256 systemd[1]: Started sshd@27-10.0.0.123:22-10.0.0.1:57566.service - OpenSSH per-connection server daemon (10.0.0.1:57566). May 16 00:01:11.535288 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 57566 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:01:11.536815 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:01:11.540560 systemd-logind[1469]: New session 28 of user core. May 16 00:01:11.549874 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 16 00:01:11.747496 sshd[4301]: Connection closed by 10.0.0.1 port 57566 May 16 00:01:11.747879 sshd-session[4299]: pam_unix(sshd:session): session closed for user core May 16 00:01:11.752600 systemd[1]: sshd@27-10.0.0.123:22-10.0.0.1:57566.service: Deactivated successfully. May 16 00:01:11.755167 systemd[1]: session-28.scope: Deactivated successfully. May 16 00:01:11.756093 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit. May 16 00:01:11.757267 systemd-logind[1469]: Removed session 28. May 16 00:01:16.760568 systemd[1]: Started sshd@28-10.0.0.123:22-10.0.0.1:57576.service - OpenSSH per-connection server daemon (10.0.0.1:57576). May 16 00:01:16.804121 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 57576 ssh2: RSA SHA256:L3yZfbXssnmWmNQgSLQO5MCD+Z7L962g53ofle9cx1k May 16 00:01:16.805936 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:01:16.811126 systemd-logind[1469]: New session 29 of user core. May 16 00:01:16.817920 systemd[1]: Started session-29.scope - Session 29 of User core. May 16 00:01:16.938344 sshd[4318]: Connection closed by 10.0.0.1 port 57576 May 16 00:01:16.938872 sshd-session[4313]: pam_unix(sshd:session): session closed for user core May 16 00:01:16.943665 systemd[1]: sshd@28-10.0.0.123:22-10.0.0.1:57576.service: Deactivated successfully. May 16 00:01:16.945822 systemd[1]: session-29.scope: Deactivated successfully. May 16 00:01:16.946895 systemd-logind[1469]: Session 29 logged out. Waiting for processes to exit. May 16 00:01:16.947945 systemd-logind[1469]: Removed session 29. May 16 00:01:22.958754 systemd[1]: Started sshd@29-10.0.0.123:22-10.0.0.1:43748.service - OpenSSH per-connection server daemon (10.0.0.1:43748). 
May 16 00:01:24.893611 kubelet[2600]: E0516 00:01:24.893569 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:34.894268 kubelet[2600]: E0516 00:01:34.894149 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:39.010556 kubelet[2600]: E0516 00:01:39.010425 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:40.893594 kubelet[2600]: E0516 00:01:40.893555 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:49.437785 kubelet[2600]: E0516 00:01:46.509770 2600 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 16 00:01:51.894537 kubelet[2600]: E0516 00:01:51.893972 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:02:01.714359 kubelet[2600]: E0516 00:02:01.714118 2600 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io localhost)" May 16 00:02:20.438626 kernel: hrtimer: interrupt took 4762766 ns May 16 00:02:20.582942 systemd[1]: cri-containerd-c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986.scope: Deactivated successfully. 
May 16 00:02:20.584000 systemd[1]: cri-containerd-c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986.scope: Consumed 2.218s CPU time, 16.2M memory peak, 0B memory swap peak.
May 16 00:02:20.584845 systemd[1]: cri-containerd-8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040.scope: Deactivated successfully.
May 16 00:02:20.635445 kubelet[2600]: E0516 00:02:20.635328 2600 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-localhost.183fd8ed78db5160 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:1788fb037218d783b217e4cf7b71e88a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:01:34.889529696 +0000 UTC m=+132.137364328,LastTimestamp:2025-05-16 00:01:34.889529696 +0000 UTC m=+132.137364328,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 00:02:20.653662 kubelet[2600]: E0516 00:02:20.653009 2600 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
May 16 00:02:20.654722 kubelet[2600]: E0516 00:02:20.654462 2600 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.93s"
May 16 00:02:20.658396 kubelet[2600]: E0516 00:02:20.658356 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:20.660727 kubelet[2600]: E0516 00:02:20.660121 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:20.785722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986-rootfs.mount: Deactivated successfully.
May 16 00:02:20.940519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040-rootfs.mount: Deactivated successfully.
May 16 00:02:23.321196 kubelet[2600]: E0516 00:02:23.321125 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:23.894230 kubelet[2600]: E0516 00:02:23.893846 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:27.898389 kubelet[2600]: E0516 00:02:27.897786 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:35.021331 kubelet[2600]: I0516 00:02:28.988122 2600 status_manager.go:890] "Failed to get status for pod" podUID="1788fb037218d783b217e4cf7b71e88a" pod="kube-system/kube-apiserver-localhost" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"
May 16 00:02:35.021331 kubelet[2600]: E0516 00:02:31.265104 2600 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io localhost)"
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:30.316007405Z" level=info msg="StopContainer for \"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" with timeout 30 (s)"
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:32.317202031Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:32.317265850Z" level=warning msg="unknown status" status=0
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:32.317303732Z" level=info msg="Stop container \"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" with signal terminated"
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:33.683550396Z" level=error msg="failed to handle container TaskExit event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:02:35.021848 containerd[1480]: time="2025-05-16T00:02:33.683618303Z" level=error msg="failed to handle container TaskExit event container_id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" pid:2464 exit_status:1 exited_at:{seconds:1747353740 nanos:582449979}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:02:35.219351 containerd[1480]: time="2025-05-16T00:02:35.219269603Z" level=info msg="TaskExit event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172}"
May 16 00:02:35.830019 systemd[1]: cri-containerd-08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00.scope: Deactivated successfully.
May 16 00:02:35.830430 systemd[1]: cri-containerd-08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00.scope: Consumed 4.075s CPU time, 20.2M memory peak, 0B memory swap peak.
May 16 00:02:35.862612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00-rootfs.mount: Deactivated successfully.
May 16 00:02:37.223679 containerd[1480]: time="2025-05-16T00:02:37.223469862Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:02:37.223679 containerd[1480]: time="2025-05-16T00:02:37.223539673Z" level=warning msg="unknown status" status=0
May 16 00:02:39.224674 containerd[1480]: time="2025-05-16T00:02:39.224531630Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:02:39.224674 containerd[1480]: time="2025-05-16T00:02:39.224605749Z" level=warning msg="unknown status" status=0
May 16 00:02:41.265422 kubelet[2600]: E0516 00:02:41.265311 2600 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
May 16 00:02:41.265422 kubelet[2600]: I0516 00:02:41.265386 2600 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
May 16 00:02:45.235087 containerd[1480]: time="2025-05-16T00:02:45.234307588Z" level=error msg="Failed to handle backOff event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172} for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:02:45.235087 containerd[1480]: time="2025-05-16T00:02:45.234724311Z" level=info msg="TaskExit event container_id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" pid:2464 exit_status:1 exited_at:{seconds:1747353740 nanos:582449979}"
May 16 00:02:50.580917 containerd[1480]: time="2025-05-16T00:02:45.835618090Z" level=error msg="failed to handle container TaskExit event container_id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" pid:2439 exit_status:1 exited_at:{seconds:1747353755 nanos:831135633}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:02:50.580917 containerd[1480]: time="2025-05-16T00:02:47.236033469Z" level=error msg="get state for c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986" error="context deadline exceeded: unknown"
May 16 00:02:50.580917 containerd[1480]: time="2025-05-16T00:02:47.236090205Z" level=warning msg="unknown status" status=0
May 16 00:02:50.580917 containerd[1480]: time="2025-05-16T00:02:49.241485082Z" level=error msg="get state for c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986" error="context deadline exceeded: unknown"
May 16 00:02:50.580917 containerd[1480]: time="2025-05-16T00:02:49.241537271Z" level=warning msg="unknown status" status=0
May 16 00:02:50.588090 kubelet[2600]: E0516 00:02:47.894440 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:50.588090 kubelet[2600]: E0516 00:02:48.895649 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:02:51.266444 kubelet[2600]: E0516 00:02:51.266235 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
May 16 00:02:54.650010 kubelet[2600]: E0516 00:02:54.649463 2600 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{coredns-668d6bf9bc-cwg2r.183fd8d5bdccfd04 kube-system 667 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-668d6bf9bc-cwg2r,UID:9354b0d9-a90e-4197-98e6-f03d0e1c38f6,APIVersion:v1,ResourceVersion:532,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:59:52 +0000 UTC,LastTimestamp:2025-05-16 00:01:34.894126091 +0000 UTC m=+132.141960723,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 00:02:55.158571 containerd[1480]: time="2025-05-16T00:02:55.158266982Z" level=info msg="StopContainer for \"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\" with timeout 30 (s)"
May 16 00:02:55.203007 containerd[1480]: time="2025-05-16T00:02:55.202293626Z" level=info msg="Stop container \"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\" with signal terminated"
May 16 00:02:55.259551 containerd[1480]: time="2025-05-16T00:02:55.259138521Z" level=error msg="Failed to handle backOff event container_id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" pid:2464 exit_status:1 exited_at:{seconds:1747353740 nanos:582449979} for c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:02:55.259551 containerd[1480]: time="2025-05-16T00:02:55.259198985Z" level=info msg="TaskExit event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172}"
May 16 00:02:57.262147 containerd[1480]: time="2025-05-16T00:02:57.260328868Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:02:57.262147 containerd[1480]: time="2025-05-16T00:02:57.260383360Z" level=warning msg="unknown status" status=0
May 16 00:02:59.261918 containerd[1480]: time="2025-05-16T00:02:59.261577041Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:02:59.261918 containerd[1480]: time="2025-05-16T00:02:59.261688400Z" level=warning msg="unknown status" status=0
May 16 00:03:01.469650 kubelet[2600]: E0516 00:03:01.469156 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
May 16 00:03:05.260543 containerd[1480]: time="2025-05-16T00:03:05.260010361Z" level=error msg="Failed to handle backOff event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172} for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:03:05.260543 containerd[1480]: time="2025-05-16T00:03:05.260083578Z" level=info msg="TaskExit event container_id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" pid:2439 exit_status:1 exited_at:{seconds:1747353755 nanos:831135633}"
May 16 00:03:06.896068 kubelet[2600]: E0516 00:03:06.895937 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:03:07.265845 containerd[1480]: time="2025-05-16T00:03:07.265533851Z" level=error msg="get state for 08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00" error="context deadline exceeded: unknown"
May 16 00:03:07.265845 containerd[1480]: time="2025-05-16T00:03:07.265584837Z" level=warning msg="unknown status" status=0
May 16 00:03:09.266969 containerd[1480]: time="2025-05-16T00:03:09.266218678Z" level=error msg="get state for 08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00" error="context deadline exceeded: unknown"
May 16 00:03:09.266969 containerd[1480]: time="2025-05-16T00:03:09.266279622Z" level=warning msg="unknown status" status=0
May 16 00:03:11.873897 kubelet[2600]: E0516 00:03:11.873011 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
May 16 00:03:13.098114 kubelet[2600]: E0516 00:03:13.098070 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.123:53412->10.0.0.119:2379: read: connection timed out" interval="1.6s"
May 16 00:03:15.271590 containerd[1480]: time="2025-05-16T00:03:15.271397659Z" level=error msg="Failed to handle backOff event container_id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" id:\"08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00\" pid:2439 exit_status:1 exited_at:{seconds:1747353755 nanos:831135633} for 08d350be1b4eb69e788aae99673fdb6a02e11c346d15a6badf609fbedd696d00" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:03:15.271590 containerd[1480]: time="2025-05-16T00:03:15.271467931Z" level=info msg="TaskExit event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172}"
May 16 00:03:17.273045 containerd[1480]: time="2025-05-16T00:03:17.272961972Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:03:17.273045 containerd[1480]: time="2025-05-16T00:03:17.273020381Z" level=warning msg="unknown status" status=0
May 16 00:03:19.273539 containerd[1480]: time="2025-05-16T00:03:19.273211374Z" level=error msg="get state for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="context deadline exceeded: unknown"
May 16 00:03:19.273539 containerd[1480]: time="2025-05-16T00:03:19.273250948Z" level=warning msg="unknown status" status=0
May 16 00:03:24.700584 kubelet[2600]: E0516 00:03:24.699958 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
May 16 00:03:24.899744 kubelet[2600]: E0516 00:03:24.896158 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:03:25.135672 containerd[1480]: time="2025-05-16T00:03:25.135101891Z" level=info msg="StopContainer for \"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" with timeout 30 (s)"
May 16 00:03:25.276463 containerd[1480]: time="2025-05-16T00:03:25.276233094Z" level=error msg="Failed to handle backOff event container_id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" id:\"8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040\" pid:3273 exit_status:1 exited_at:{seconds:1747353740 nanos:598820172} for 8e10e9bbcda425ab4f80e242641cbeb5461aff7f730744b773e1bed7d9ef9040" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
May 16 00:03:25.276463 containerd[1480]: time="2025-05-16T00:03:25.276324335Z" level=info msg="TaskExit event container_id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" id:\"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" pid:2464 exit_status:1 exited_at:{seconds:1747353740 nanos:582449979}"
May 16 00:03:25.545555 containerd[1480]: time="2025-05-16T00:03:25.545467965Z" level=info msg="Kill container \"0491825e2b9ead831fcecc8e3c4167535c1427f728216e529ce779af084d0853\""
May 16 00:03:27.136820 containerd[1480]: time="2025-05-16T00:03:27.135535069Z" level=error msg="get state for c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986" error="context deadline exceeded: unknown"
May 16 00:03:27.136820 containerd[1480]: time="2025-05-16T00:03:27.136622642Z" level=warning msg="unknown status" status=0
May 16 00:03:27.136820 containerd[1480]: time="2025-05-16T00:03:27.136667125Z" level=info msg="Stop container \"c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986\" with signal terminated"
May 16 00:03:27.278817 containerd[1480]: time="2025-05-16T00:03:27.278109040Z" level=error msg="get state for c44222dd396d7758f80538e5b15b3de1f1db93318ccb2a6feab7f5c7a6182986" error="context deadline exceeded: unknown"
May 16 00:03:27.278817 containerd[1480]: time="2025-05-16T00:03:27.278169904Z" level=warning msg="unknown status" status=0