May 14 00:00:53.943586 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:19:41 -00 2025
May 14 00:00:53.943615 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51
May 14 00:00:53.943631 kernel: BIOS-provided physical RAM map:
May 14 00:00:53.943640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 00:00:53.943648 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 00:00:53.943656 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 00:00:53.943666 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 00:00:53.943675 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 00:00:53.943683 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 00:00:53.943692 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 00:00:53.943700 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 00:00:53.943712 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 00:00:53.943720 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 00:00:53.943729 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 00:00:53.943739 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 00:00:53.943748 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 00:00:53.943758 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 00:00:53.943765 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 00:00:53.943772 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 00:00:53.943779 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 00:00:53.943785 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 00:00:53.943792 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 00:00:53.943799 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 00:00:53.943806 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 00:00:53.943813 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 00:00:53.943820 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 00:00:53.943827 kernel: NX (Execute Disable) protection: active
May 14 00:00:53.943836 kernel: APIC: Static calls initialized
May 14 00:00:53.943845 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 00:00:53.943854 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 00:00:53.943862 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 00:00:53.943868 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 00:00:53.943876 kernel: extended physical RAM map:
May 14 00:00:53.943885 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 00:00:53.943894 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 00:00:53.943903 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 00:00:53.943912 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 00:00:53.943921 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 00:00:53.943931 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 00:00:53.943943 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 00:00:53.943957 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 14 00:00:53.943966 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 14 00:00:53.943977 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 14 00:00:53.943987 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 14 00:00:53.943997 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 14 00:00:53.944010 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 00:00:53.944020 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 00:00:53.944030 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 00:00:53.944040 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 00:00:53.944049 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 00:00:53.944060 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 00:00:53.944070 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 00:00:53.944081 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 00:00:53.944091 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 00:00:53.944103 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 00:00:53.944110 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 00:00:53.944117 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 00:00:53.944125 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 00:00:53.944132 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 00:00:53.944139 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 00:00:53.944146 kernel: efi: EFI v2.7 by EDK II
May 14 00:00:53.944153 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 14 00:00:53.944160 kernel: random: crng init done
May 14 00:00:53.944168 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 00:00:53.944175 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 00:00:53.944182 kernel: secureboot: Secure boot disabled
May 14 00:00:53.944192 kernel: SMBIOS 2.8 present.
May 14 00:00:53.944199 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 00:00:53.944206 kernel: Hypervisor detected: KVM
May 14 00:00:53.944213 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 00:00:53.944220 kernel: kvm-clock: using sched offset of 2789612469 cycles
May 14 00:00:53.944228 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 00:00:53.944236 kernel: tsc: Detected 2794.746 MHz processor
May 14 00:00:53.944243 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 00:00:53.944251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 00:00:53.944258 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 00:00:53.944269 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 00:00:53.944296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 00:00:53.944303 kernel: Using GB pages for direct mapping
May 14 00:00:53.944311 kernel: ACPI: Early table checksum verification disabled
May 14 00:00:53.944318 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 00:00:53.944326 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 00:00:53.944333 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944341 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944348 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 00:00:53.944358 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944366 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944373 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944381 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:00:53.944388 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 00:00:53.944395 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 00:00:53.944403 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 00:00:53.944410 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 00:00:53.944417 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 00:00:53.944427 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 00:00:53.944435 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 00:00:53.944442 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 00:00:53.944449 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 00:00:53.944456 kernel: No NUMA configuration found
May 14 00:00:53.944464 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 00:00:53.944471 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 14 00:00:53.944478 kernel: Zone ranges:
May 14 00:00:53.944486 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 00:00:53.944495 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 00:00:53.944503 kernel: Normal empty
May 14 00:00:53.944510 kernel: Movable zone start for each node
May 14 00:00:53.944517 kernel: Early memory node ranges
May 14 00:00:53.944525 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 00:00:53.944541 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 00:00:53.944551 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 00:00:53.944560 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 00:00:53.944570 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 00:00:53.944582 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 00:00:53.944591 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 14 00:00:53.944601 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 14 00:00:53.944611 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 00:00:53.944620 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 00:00:53.944630 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 00:00:53.944649 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 00:00:53.944659 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 00:00:53.944666 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 00:00:53.944674 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 00:00:53.944681 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 00:00:53.944689 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 00:00:53.944696 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 00:00:53.944707 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 00:00:53.944714 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 00:00:53.944722 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 00:00:53.944729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 00:00:53.944739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 00:00:53.944747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 00:00:53.944755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 00:00:53.944762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 00:00:53.944770 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 00:00:53.944777 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 00:00:53.944785 kernel: TSC deadline timer available
May 14 00:00:53.944792 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 00:00:53.944800 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 00:00:53.944807 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 00:00:53.944817 kernel: kvm-guest: setup PV sched yield
May 14 00:00:53.944825 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 00:00:53.944832 kernel: Booting paravirtualized kernel on KVM
May 14 00:00:53.944840 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 00:00:53.944848 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 00:00:53.944856 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 00:00:53.944863 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 00:00:53.944871 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 00:00:53.944878 kernel: kvm-guest: PV spinlocks enabled
May 14 00:00:53.944889 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 00:00:53.944897 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51
May 14 00:00:53.944905 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:00:53.944913 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:00:53.944921 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:00:53.944928 kernel: Fallback order for Node 0: 0
May 14 00:00:53.944936 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 14 00:00:53.944943 kernel: Policy zone: DMA32
May 14 00:00:53.944953 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:00:53.944962 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43480K init, 1596K bss, 177824K reserved, 0K cma-reserved)
May 14 00:00:53.944969 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:00:53.944977 kernel: ftrace: allocating 37918 entries in 149 pages
May 14 00:00:53.944984 kernel: ftrace: allocated 149 pages with 4 groups
May 14 00:00:53.944992 kernel: Dynamic Preempt: voluntary
May 14 00:00:53.944999 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:00:53.945008 kernel: rcu: RCU event tracing is enabled.
May 14 00:00:53.945016 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:00:53.945026 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:00:53.945034 kernel: Rude variant of Tasks RCU enabled.
May 14 00:00:53.945041 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:00:53.945049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:00:53.945056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:00:53.945064 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 00:00:53.945072 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 00:00:53.945079 kernel: Console: colour dummy device 80x25
May 14 00:00:53.945087 kernel: printk: console [ttyS0] enabled
May 14 00:00:53.945097 kernel: ACPI: Core revision 20230628
May 14 00:00:53.945105 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 00:00:53.945112 kernel: APIC: Switch to symmetric I/O mode setup
May 14 00:00:53.945120 kernel: x2apic enabled
May 14 00:00:53.945128 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 00:00:53.945135 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 00:00:53.945143 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 00:00:53.945151 kernel: kvm-guest: setup PV IPIs
May 14 00:00:53.945158 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 00:00:53.945168 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 00:00:53.945176 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 00:00:53.945184 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 00:00:53.945191 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 00:00:53.945199 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 00:00:53.945206 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 00:00:53.945214 kernel: Spectre V2 : Mitigation: Retpolines
May 14 00:00:53.945222 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 00:00:53.945229 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 00:00:53.945239 kernel: RETBleed: Mitigation: untrained return thunk
May 14 00:00:53.945247 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 00:00:53.945255 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 00:00:53.945262 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 00:00:53.945281 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 00:00:53.945288 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 00:00:53.945296 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 00:00:53.945304 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 00:00:53.945314 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 00:00:53.945321 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 00:00:53.945329 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 00:00:53.945337 kernel: Freeing SMP alternatives memory: 32K
May 14 00:00:53.945344 kernel: pid_max: default: 32768 minimum: 301
May 14 00:00:53.945352 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 00:00:53.945360 kernel: landlock: Up and running.
May 14 00:00:53.945367 kernel: SELinux: Initializing.
May 14 00:00:53.945375 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:53.945385 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:00:53.945393 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 00:00:53.945400 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:53.945408 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:53.945416 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 00:00:53.945423 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 00:00:53.945431 kernel: ... version: 0
May 14 00:00:53.945438 kernel: ... bit width: 48
May 14 00:00:53.945446 kernel: ... generic registers: 6
May 14 00:00:53.945457 kernel: ... value mask: 0000ffffffffffff
May 14 00:00:53.945464 kernel: ... max period: 00007fffffffffff
May 14 00:00:53.945472 kernel: ... fixed-purpose events: 0
May 14 00:00:53.945479 kernel: ... event mask: 000000000000003f
May 14 00:00:53.945487 kernel: signal: max sigframe size: 1776
May 14 00:00:53.945495 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:00:53.945503 kernel: rcu: Max phase no-delay instances is 400.
May 14 00:00:53.945510 kernel: smp: Bringing up secondary CPUs ...
May 14 00:00:53.945518 kernel: smpboot: x86: Booting SMP configuration:
May 14 00:00:53.945527 kernel: .... node #0, CPUs: #1 #2 #3
May 14 00:00:53.945544 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:00:53.945554 kernel: smpboot: Max logical packages: 1
May 14 00:00:53.945564 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 00:00:53.945574 kernel: devtmpfs: initialized
May 14 00:00:53.945584 kernel: x86/mm: Memory block size: 128MB
May 14 00:00:53.945594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 00:00:53.945604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 00:00:53.945614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 00:00:53.945623 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 00:00:53.945634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 14 00:00:53.945642 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 00:00:53.945649 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:00:53.945657 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:00:53.945665 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:00:53.945672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:00:53.945680 kernel: audit: initializing netlink subsys (disabled)
May 14 00:00:53.945688 kernel: audit: type=2000 audit(1747180853.402:1): state=initialized audit_enabled=0 res=1
May 14 00:00:53.945698 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:00:53.945706 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 00:00:53.945713 kernel: cpuidle: using governor menu
May 14 00:00:53.945721 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:00:53.945728 kernel: dca service started, version 1.12.1
May 14 00:00:53.945736 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 14 00:00:53.945744 kernel: PCI: Using configuration type 1 for base access
May 14 00:00:53.945751 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 00:00:53.945759 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:00:53.945769 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 00:00:53.945777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:00:53.945784 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 00:00:53.945792 kernel: ACPI: Added _OSI(Module Device)
May 14 00:00:53.945799 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:00:53.945807 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:00:53.945814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:00:53.945822 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:00:53.945830 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 00:00:53.945840 kernel: ACPI: Interpreter enabled
May 14 00:00:53.945847 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 00:00:53.945855 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 00:00:53.945863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 00:00:53.945871 kernel: PCI: Using E820 reservations for host bridge windows
May 14 00:00:53.945878 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 00:00:53.945886 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:00:53.946069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:00:53.946207 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 00:00:53.946350 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 00:00:53.946361 kernel: PCI host bridge to bus 0000:00
May 14 00:00:53.946490 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 00:00:53.946631 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 00:00:53.946749 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 00:00:53.946864 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 00:00:53.946986 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 00:00:53.947100 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 00:00:53.947214 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:00:53.947384 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 00:00:53.947544 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 00:00:53.947706 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 14 00:00:53.947852 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 14 00:00:53.947987 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 14 00:00:53.948112 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 14 00:00:53.948237 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 00:00:53.948403 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:00:53.948582 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 14 00:00:53.948727 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 14 00:00:53.948858 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 00:00:53.948992 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 00:00:53.949119 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 14 00:00:53.949245 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 14 00:00:53.949387 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 00:00:53.949545 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 00:00:53.949709 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 14 00:00:53.949872 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 14 00:00:53.950004 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 00:00:53.950129 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 14 00:00:53.950311 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 00:00:53.950467 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 00:00:53.950628 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 00:00:53.950757 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 14 00:00:53.950887 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 14 00:00:53.951020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 00:00:53.951174 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 14 00:00:53.951191 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 00:00:53.951202 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 00:00:53.951213 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 00:00:53.951223 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 00:00:53.951251 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 00:00:53.951284 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 00:00:53.951320 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 00:00:53.951339 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 00:00:53.951364 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 00:00:53.951375 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 00:00:53.951403 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 00:00:53.951429 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 00:00:53.951439 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 00:00:53.951472 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 00:00:53.951482 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 00:00:53.951493 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 00:00:53.951506 kernel: iommu: Default domain type: Translated
May 14 00:00:53.951518 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 00:00:53.951530 kernel: efivars: Registered efivars operations
May 14 00:00:53.951550 kernel: PCI: Using ACPI for IRQ routing
May 14 00:00:53.951561 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 00:00:53.951573 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 00:00:53.951583 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 00:00:53.951597 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 14 00:00:53.951609 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 14 00:00:53.951619 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 00:00:53.951630 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 00:00:53.951640 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 14 00:00:53.951651 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 00:00:53.951812 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 00:00:53.951941 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 00:00:53.952077 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 00:00:53.952091 kernel: vgaarb: loaded
May 14 00:00:53.952102 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 00:00:53.952113 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 00:00:53.952126 kernel: clocksource: Switched to clocksource kvm-clock
May 14 00:00:53.952136 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:00:53.952147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:00:53.952159 kernel: pnp: PnP ACPI init
May 14 00:00:53.952357 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 00:00:53.952375 kernel: pnp: PnP ACPI: found 6 devices
May 14 00:00:53.952383 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 00:00:53.952391 kernel: NET: Registered PF_INET protocol family
May 14 00:00:53.952399 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:00:53.952424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:00:53.952435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:00:53.952445 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:00:53.952453 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 00:00:53.952464 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:00:53.952472 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:53.952480 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:00:53.952488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:00:53.952497 kernel: NET: Registered PF_XDP protocol family
May 14 00:00:53.952659 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 14 00:00:53.952826 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 14 00:00:53.952963 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 00:00:53.953083 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 00:00:53.953207 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 00:00:53.953416 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 00:00:53.953577 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 00:00:53.953726 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 00:00:53.953738 kernel: PCI: CLS 0 bytes, default 64
May 14 00:00:53.953746 kernel: Initialise system trusted keyrings
May 14 00:00:53.953754 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:00:53.953767 kernel: Key type asymmetric registered
May 14 00:00:53.953775 kernel: Asymmetric key parser 'x509' registered
May 14 00:00:53.953783 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 00:00:53.953791 kernel: io scheduler mq-deadline registered
May 14 00:00:53.953799 kernel: io scheduler kyber registered
May 14 00:00:53.953806 kernel: io scheduler bfq registered
May 14 00:00:53.953814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 00:00:53.953823 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 00:00:53.953831 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 00:00:53.953842 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 00:00:53.953852 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:00:53.953861 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 00:00:53.953869 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 00:00:53.953877 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 00:00:53.953885 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 00:00:53.954020 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 00:00:53.954033 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 00:00:53.954182 kernel: rtc_cmos 00:04: registered as rtc0
May 14 00:00:53.954341 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T00:00:53 UTC (1747180853)
May 14 00:00:53.954464 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 00:00:53.954475 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 00:00:53.954483 kernel: efifb: probing for efifb
May 14 00:00:53.954491 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 00:00:53.954503 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 00:00:53.954511 kernel: efifb: scrolling: redraw
May 14 00:00:53.954519 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 00:00:53.954527 kernel: Console: switching to colour frame buffer device 160x50
May 14 00:00:53.954546 kernel: fb0: EFI VGA frame buffer device
May 14 00:00:53.954557 kernel: pstore: Using crash dump compression: deflate
May 14 00:00:53.954567 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 00:00:53.954578 kernel: NET: Registered PF_INET6 protocol family
May 14 00:00:53.954589 kernel: Segment Routing with IPv6
May 14 00:00:53.954603 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:00:53.954614 kernel: NET: Registered PF_PACKET protocol family
May 14 00:00:53.954624 kernel: Key type dns_resolver registered
May 14 00:00:53.954633 kernel: IPI shorthand broadcast: enabled
May 14 00:00:53.954641 kernel: sched_clock: Marking stable (727002910, 182896046)->(927911529, -18012573)
May 14 00:00:53.954649 kernel: registered taskstats version 1
May 14 00:00:53.954657 kernel: Loading compiled-in X.509 certificates
May 14 00:00:53.954665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 50ddd1b04864f80ac4ca221f8647fbbda919e0fd'
May 14 00:00:53.954673 kernel: Key type .fscrypt registered
May 14 00:00:53.954686 kernel: Key type fscrypt-provisioning registered
May 14 00:00:53.954694 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:00:53.954702 kernel: ima: Allocated hash algorithm: sha1
May 14 00:00:53.954709 kernel: ima: No architecture policies found
May 14 00:00:53.954717 kernel: clk: Disabling unused clocks
May 14 00:00:53.954726 kernel: Freeing unused kernel image (initmem) memory: 43480K
May 14 00:00:53.954734 kernel: Write protecting the kernel read-only data: 38912k
May 14 00:00:53.954742 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 14 00:00:53.954750 kernel: Run /init as init process
May 14 00:00:53.954760 kernel: with arguments:
May 14 00:00:53.954768 kernel: /init
May 14 00:00:53.954776 kernel: with environment:
May 14 00:00:53.954784 kernel: HOME=/
May 14 00:00:53.954791 kernel: TERM=linux
May 14 00:00:53.954799 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:00:53.954808 systemd[1]: Successfully made /usr/ read-only.
May 14 00:00:53.954819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:00:53.954831 systemd[1]: Detected virtualization kvm.
May 14 00:00:53.954848 systemd[1]: Detected architecture x86-64.
May 14 00:00:53.954865 systemd[1]: Running in initrd.
May 14 00:00:53.954880 systemd[1]: No hostname configured, using default hostname.
May 14 00:00:53.954903 systemd[1]: Hostname set to .
May 14 00:00:53.954912 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:00:53.954920 systemd[1]: Queued start job for default target initrd.target.
May 14 00:00:53.954929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:53.954940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:53.954949 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:00:53.954959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:00:53.954970 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:00:53.954989 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:00:53.955003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:00:53.955018 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:00:53.955031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:53.955044 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:53.955055 systemd[1]: Reached target paths.target - Path Units.
May 14 00:00:53.955063 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:00:53.955072 systemd[1]: Reached target swap.target - Swaps.
May 14 00:00:53.955081 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:00:53.955093 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:53.955107 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:53.955119 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:00:53.955128 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:00:53.955136 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:53.955145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:00:53.955154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:00:53.955163 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:00:53.955174 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:00:53.955183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:00:53.955192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:00:53.955203 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:00:53.955215 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:00:53.955227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:00:53.955239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:53.955253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:00:53.955266 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:53.955299 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:00:53.955337 systemd-journald[192]: Collecting audit messages is disabled.
May 14 00:00:53.955362 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:00:53.955371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:53.955380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:53.955389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:00:53.955397 systemd-journald[192]: Journal started
May 14 00:00:53.955417 systemd-journald[192]: Runtime Journal (/run/log/journal/9c0c5f25318243068a9271f8f8d17dc5) is 6M, max 48.2M, 42.2M free.
May 14 00:00:53.953728 systemd-modules-load[195]: Inserted module 'overlay'
May 14 00:00:53.958371 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:00:53.962663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:00:53.965467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:00:53.975551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:53.976402 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:00:53.990509 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:00:53.991358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:54.000305 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:00:54.002522 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 14 00:00:54.003527 kernel: Bridge firewalling registered
May 14 00:00:54.004469 dracut-cmdline[222]: dracut-dracut-053
May 14 00:00:54.005378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:54.007753 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51
May 14 00:00:54.014126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:00:54.042459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:54.050423 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:00:54.088124 systemd-resolved[270]: Positive Trust Anchors:
May 14 00:00:54.088143 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:00:54.088183 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:00:54.091016 systemd-resolved[270]: Defaulting to hostname 'linux'.
May 14 00:00:54.092306 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:00:54.098124 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:54.118316 kernel: SCSI subsystem initialized
May 14 00:00:54.128295 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:00:54.140321 kernel: iscsi: registered transport (tcp)
May 14 00:00:54.166309 kernel: iscsi: registered transport (qla4xxx)
May 14 00:00:54.166376 kernel: QLogic iSCSI HBA Driver
May 14 00:00:54.217891 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:00:54.233559 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:00:54.258539 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:00:54.258604 kernel: device-mapper: uevent: version 1.0.3
May 14 00:00:54.259662 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:00:54.304317 kernel: raid6: avx2x4 gen() 26679 MB/s
May 14 00:00:54.321312 kernel: raid6: avx2x2 gen() 22379 MB/s
May 14 00:00:54.338505 kernel: raid6: avx2x1 gen() 19932 MB/s
May 14 00:00:54.338541 kernel: raid6: using algorithm avx2x4 gen() 26679 MB/s
May 14 00:00:54.356507 kernel: raid6: .... xor() 7847 MB/s, rmw enabled
May 14 00:00:54.356583 kernel: raid6: using avx2x2 recovery algorithm
May 14 00:00:54.378309 kernel: xor: automatically using best checksumming function avx
May 14 00:00:54.536312 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:00:54.550055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:00:54.566430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:54.583006 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 14 00:00:54.588583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:00:54.598551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:00:54.611614 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
May 14 00:00:54.647571 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:54.666392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:00:54.734712 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:54.744600 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:00:54.755792 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:54.757776 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:54.760328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:54.762719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:00:54.773298 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 00:00:54.778588 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:00:54.774466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:00:54.789132 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:00:54.789172 kernel: GPT:9289727 != 19775487
May 14 00:00:54.789185 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:00:54.789195 kernel: GPT:9289727 != 19775487
May 14 00:00:54.789205 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:00:54.789216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:54.788137 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:54.797579 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:00:54.812298 kernel: libata version 3.00 loaded.
May 14 00:00:54.818565 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 00:00:54.818600 kernel: AES CTR mode by8 optimization enabled
May 14 00:00:54.820567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:00:54.820787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:54.826363 kernel: ahci 0000:00:1f.2: version 3.0
May 14 00:00:54.826570 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 00:00:54.824266 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:54.831057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:54.835129 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 00:00:54.835337 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 00:00:54.831246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:54.839646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:54.845214 kernel: BTRFS: device fsid 87997324-54dc-4f74-bc1a-3f18f5f2e9f7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463)
May 14 00:00:54.845235 kernel: scsi host0: ahci
May 14 00:00:54.845681 kernel: scsi host1: ahci
May 14 00:00:54.848224 kernel: scsi host2: ahci
May 14 00:00:54.850373 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
May 14 00:00:54.850425 kernel: scsi host3: ahci
May 14 00:00:54.850572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:54.855582 kernel: scsi host4: ahci
May 14 00:00:54.859620 kernel: scsi host5: ahci
May 14 00:00:54.859863 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 14 00:00:54.859888 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 14 00:00:54.861294 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 14 00:00:54.861320 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 14 00:00:54.863034 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 14 00:00:54.863060 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 14 00:00:54.868888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 00:00:54.882961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 00:00:54.914616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 00:00:54.916142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 00:00:54.926712 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:00:54.944443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:00:54.945729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:00:54.945808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:54.948176 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:54.950223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:00:54.951885 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 00:00:54.966986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:00:54.969230 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:00:54.988524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:00:55.033368 disk-uuid[557]: Primary Header is updated.
May 14 00:00:55.033368 disk-uuid[557]: Secondary Entries is updated.
May 14 00:00:55.033368 disk-uuid[557]: Secondary Header is updated.
May 14 00:00:55.038320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:55.043317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:55.176301 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 00:00:55.176366 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 00:00:55.178020 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 00:00:55.178294 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 00:00:55.179294 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 00:00:55.180292 kernel: ata3.00: applying bridge limits
May 14 00:00:55.180307 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 00:00:55.181309 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 00:00:55.182332 kernel: ata3.00: configured for UDMA/100
May 14 00:00:55.183294 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 00:00:55.238305 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 00:00:55.238583 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 00:00:55.252299 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 00:00:56.047304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:00:56.048143 disk-uuid[572]: The operation has completed successfully.
May 14 00:00:56.081229 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:00:56.081440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 00:00:56.131510 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 00:00:56.135377 sh[600]: Success
May 14 00:00:56.149311 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 14 00:00:56.190310 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 00:00:56.202230 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 00:00:56.205025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 00:00:56.217962 kernel: BTRFS info (device dm-0): first mount of filesystem 87997324-54dc-4f74-bc1a-3f18f5f2e9f7
May 14 00:00:56.218023 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 00:00:56.218037 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 00:00:56.219043 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 00:00:56.219842 kernel: BTRFS info (device dm-0): using free space tree
May 14 00:00:56.226159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 00:00:56.227318 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 00:00:56.242534 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 00:00:56.261557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 00:00:56.285103 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7
May 14 00:00:56.285167 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:00:56.285182 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:56.311316 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:56.317317 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7
May 14 00:00:56.379592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:56.399594 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:00:56.427259 systemd-networkd[776]: lo: Link UP
May 14 00:00:56.427283 systemd-networkd[776]: lo: Gained carrier
May 14 00:00:56.429008 systemd-networkd[776]: Enumeration completed
May 14 00:00:56.429403 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:56.429409 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:00:56.433546 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:00:56.437606 systemd[1]: Reached target network.target - Network.
May 14 00:00:56.462721 systemd-networkd[776]: eth0: Link UP
May 14 00:00:56.462731 systemd-networkd[776]: eth0: Gained carrier
May 14 00:00:56.462744 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:00:56.491399 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:00:56.496973 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 00:00:56.507687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 00:00:56.654110 ignition[781]: Ignition 2.20.0
May 14 00:00:56.654124 ignition[781]: Stage: fetch-offline
May 14 00:00:56.654165 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:56.654175 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:56.654349 ignition[781]: parsed url from cmdline: ""
May 14 00:00:56.654355 ignition[781]: no config URL provided
May 14 00:00:56.654361 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:00:56.654373 ignition[781]: no config at "/usr/lib/ignition/user.ign"
May 14 00:00:56.654407 ignition[781]: op(1): [started] loading QEMU firmware config module
May 14 00:00:56.654418 ignition[781]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:00:56.670334 ignition[781]: op(1): [finished] loading QEMU firmware config module
May 14 00:00:56.710885 ignition[781]: parsing config with SHA512: e53fd32aaec7d729712cee033471a1e7f595f3ea0357ef4de97f0820a33783923a034f8660a7516790d5e4f28fc194bac82ab848de75d3dbb1a70673c13def76
May 14 00:00:56.716116 unknown[781]: fetched base config from "system"
May 14 00:00:56.716134 unknown[781]: fetched user config from "qemu"
May 14 00:00:56.716708 ignition[781]: fetch-offline: fetch-offline passed
May 14 00:00:56.719527 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:56.716798 ignition[781]: Ignition finished successfully
May 14 00:00:56.721258 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:00:56.731503 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 00:00:56.753977 ignition[792]: Ignition 2.20.0
May 14 00:00:56.753991 ignition[792]: Stage: kargs
May 14 00:00:56.754186 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:56.754198 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:56.755196 ignition[792]: kargs: kargs passed
May 14 00:00:56.755257 ignition[792]: Ignition finished successfully
May 14 00:00:56.759953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 00:00:56.766586 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 00:00:56.806262 ignition[801]: Ignition 2.20.0
May 14 00:00:56.806286 ignition[801]: Stage: disks
May 14 00:00:56.806461 ignition[801]: no configs at "/usr/lib/ignition/base.d"
May 14 00:00:56.806483 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:56.807290 ignition[801]: disks: disks passed
May 14 00:00:56.807344 ignition[801]: Ignition finished successfully
May 14 00:00:56.812047 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 00:00:56.815287 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 00:00:56.817706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 00:00:56.820307 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:00:56.821489 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:00:56.824694 systemd[1]: Reached target basic.target - Basic System.
May 14 00:00:56.837500 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 00:00:56.854803 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 00:00:56.864867 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 00:00:56.874574 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 00:00:56.979305 kernel: EXT4-fs (vda9): mounted filesystem cf173df9-f79a-4e29-be52-c2936b0d4e57 r/w with ordered data mode. Quota mode: none.
May 14 00:00:56.979780 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 00:00:56.982482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 00:00:56.998388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:57.011231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 00:00:57.013666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 00:00:57.019753 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819)
May 14 00:00:57.019778 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7
May 14 00:00:57.019791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:00:57.019804 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:57.013726 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:00:57.019761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:57.025455 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:57.027150 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:57.029146 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 00:00:57.038446 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 00:00:57.072587 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:00:57.090090 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
May 14 00:00:57.093955 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:00:57.098017 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:00:57.205147 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 00:00:57.213421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 00:00:57.216876 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 00:00:57.221510 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 00:00:57.222778 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7
May 14 00:00:57.243022 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 00:00:57.423146 ignition[936]: INFO : Ignition 2.20.0
May 14 00:00:57.423146 ignition[936]: INFO : Stage: mount
May 14 00:00:57.449475 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:57.449475 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:57.449475 ignition[936]: INFO : mount: mount passed
May 14 00:00:57.449475 ignition[936]: INFO : Ignition finished successfully
May 14 00:00:57.426775 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 00:00:57.457500 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 00:00:57.466149 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:00:57.488320 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945)
May 14 00:00:57.490519 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7
May 14 00:00:57.490545 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:00:57.490560 kernel: BTRFS info (device vda6): using free space tree
May 14 00:00:57.494291 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:00:57.495669 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:00:57.531729 ignition[962]: INFO : Ignition 2.20.0
May 14 00:00:57.531729 ignition[962]: INFO : Stage: files
May 14 00:00:57.533810 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:57.533810 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:57.533810 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
May 14 00:00:57.533810 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 00:00:57.533810 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 00:00:57.549676 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 00:00:57.549676 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 00:00:57.549676 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 00:00:57.549676 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 00:00:57.549676 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 00:00:57.537549 unknown[962]: wrote ssh authorized keys file for user: core
May 14 00:00:57.648083 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 00:00:57.793545 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 00:00:57.793545 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:00:57.798404 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 00:00:57.953557 systemd-networkd[776]: eth0: Gained IPv6LL
May 14 00:00:58.138989 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 00:00:58.368865 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:00:58.371463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 00:00:58.674562 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 00:00:59.198423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 00:00:59.198423 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 00:00:59.202793 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:59.227907 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:59.240383 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 00:00:59.251292 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 00:00:59.251292 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 14 00:00:59.251292 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 14 00:00:59.251292 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:59.251292 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 00:00:59.251292 ignition[962]: INFO : files: files passed
May 14 00:00:59.251292 ignition[962]: INFO : Ignition finished successfully
May 14 00:00:59.243835 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 00:00:59.263446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 00:00:59.265434 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 00:00:59.267478 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 00:00:59.267618 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 00:00:59.277181 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 00:00:59.280161 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:59.281982 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:59.284809 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 00:00:59.283541 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:59.285092 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 00:00:59.295594 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 00:00:59.322776 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 00:00:59.322954 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 00:00:59.331118 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 00:00:59.332921 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 00:00:59.335236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 00:00:59.336166 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 00:00:59.390864 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:59.403678 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 00:00:59.531756 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 00:00:59.532165 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:00:59.534815 systemd[1]: Stopped target timers.target - Timer Units.
May 14 00:00:59.537157 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 00:00:59.537358 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 00:00:59.539717 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 00:00:59.540176 systemd[1]: Stopped target basic.target - Basic System.
May 14 00:00:59.540784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 00:00:59.541184 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:00:59.541819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 00:00:59.542217 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 00:00:59.542810 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:00:59.543211 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 00:00:59.543785 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 00:00:59.544150 systemd[1]: Stopped target swap.target - Swaps.
May 14 00:00:59.544656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 00:00:59.544789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:00:59.602860 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 00:00:59.606608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:00:59.611454 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 00:00:59.611631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:00:59.612368 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 00:00:59.612551 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 00:00:59.613116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 00:00:59.613268 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:00:59.622057 systemd[1]: Stopped target paths.target - Path Units.
May 14 00:00:59.624981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 00:00:59.651506 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:00:59.655311 systemd[1]: Stopped target slices.target - Slice Units.
May 14 00:00:59.655920 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 00:00:59.664159 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 00:00:59.664357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:00:59.666695 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 00:00:59.666822 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:00:59.705374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 00:00:59.705604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 00:00:59.707458 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 00:00:59.707674 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 00:00:59.725787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 00:00:59.739523 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 00:00:59.739891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:00:59.746887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 00:00:59.749740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 00:00:59.750029 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:00:59.751838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 00:00:59.752023 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:00:59.795786 ignition[1016]: INFO : Ignition 2.20.0
May 14 00:00:59.795786 ignition[1016]: INFO : Stage: umount
May 14 00:00:59.795786 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:00:59.795786 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:00:59.795786 ignition[1016]: INFO : umount: umount passed
May 14 00:00:59.795786 ignition[1016]: INFO : Ignition finished successfully
May 14 00:00:59.760563 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 00:00:59.760705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 00:00:59.790262 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 00:00:59.790458 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 00:00:59.801160 systemd[1]: Stopped target network.target - Network.
May 14 00:00:59.820790 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 00:00:59.821797 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 00:00:59.863052 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 00:00:59.863159 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 00:00:59.867237 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 00:00:59.867369 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 00:00:59.875653 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 00:00:59.875763 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 00:00:59.878760 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 00:00:59.881684 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 00:00:59.884119 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 00:00:59.887947 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 00:00:59.888121 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 00:00:59.895961 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 00:00:59.896949 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 00:00:59.897067 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:00:59.912943 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 00:00:59.928426 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 00:00:59.928573 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 00:00:59.935558 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 00:00:59.935893 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 00:00:59.935950 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:00:59.954505 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 00:00:59.955928 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 00:00:59.956036 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:00:59.960123 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 00:00:59.960239 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 00:00:59.970709 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 00:00:59.970835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 00:00:59.979761 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:00:59.985816 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 00:01:00.003262 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 00:01:00.003690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:01:00.004756 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 00:01:00.004900 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 00:01:00.010714 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 00:01:00.010829 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 00:01:00.038266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 00:01:00.038370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:01:00.042157 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 00:01:00.042257 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:01:00.100111 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 00:01:00.100253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 00:01:00.101873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:01:00.101949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:00.163915 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 00:01:00.191357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 00:01:00.191523 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:01:00.197751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:01:00.197852 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:00.218075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 00:01:00.218410 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 00:01:00.412504 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 00:01:00.414054 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 00:01:00.421611 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 00:01:00.430862 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 00:01:00.431001 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 00:01:00.444821 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 00:01:00.463444 systemd[1]: Switching root.
May 14 00:01:00.511694 systemd-journald[192]: Journal stopped
May 14 00:01:02.545479 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 14 00:01:02.545568 kernel: SELinux: policy capability network_peer_controls=1
May 14 00:01:02.545590 kernel: SELinux: policy capability open_perms=1
May 14 00:01:02.545602 kernel: SELinux: policy capability extended_socket_class=1
May 14 00:01:02.545613 kernel: SELinux: policy capability always_check_network=0
May 14 00:01:02.545632 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 00:01:02.545644 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 00:01:02.545659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 00:01:02.545671 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 00:01:02.545682 kernel: audit: type=1403 audit(1747180861.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 00:01:02.545701 systemd[1]: Successfully loaded SELinux policy in 60.148ms.
May 14 00:01:02.545726 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.571ms.
May 14 00:01:02.545740 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:01:02.545752 systemd[1]: Detected virtualization kvm.
May 14 00:01:02.545765 systemd[1]: Detected architecture x86-64.
May 14 00:01:02.545780 systemd[1]: Detected first boot.
May 14 00:01:02.545805 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:01:02.545818 zram_generator::config[1062]: No configuration found.
May 14 00:01:02.545831 kernel: Guest personality initialized and is inactive
May 14 00:01:02.545842 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 00:01:02.545854 kernel: Initialized host personality
May 14 00:01:02.545865 kernel: NET: Registered PF_VSOCK protocol family
May 14 00:01:02.545877 systemd[1]: Populated /etc with preset unit settings.
May 14 00:01:02.545890 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 00:01:02.545905 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 00:01:02.545917 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 00:01:02.545930 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 00:01:02.545942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 00:01:02.545954 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 00:01:02.545972 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 00:01:02.545984 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 00:01:02.545996 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 00:01:02.546011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 00:01:02.546024 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 00:01:02.546036 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 00:01:02.546049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:01:02.546062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:01:02.546082 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 00:01:02.546094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 00:01:02.546107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 00:01:02.546120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:01:02.546135 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 00:01:02.546148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:01:02.546160 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 00:01:02.546172 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 00:01:02.546185 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 00:01:02.546197 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 00:01:02.546209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:01:02.546222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:01:02.546237 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:01:02.546249 systemd[1]: Reached target swap.target - Swaps.
May 14 00:01:02.546261 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 00:01:02.546293 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 00:01:02.546306 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 00:01:02.546318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:01:02.546331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:01:02.546358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:01:02.546373 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 00:01:02.546404 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 00:01:02.546419 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 00:01:02.546439 systemd[1]: Mounting media.mount - External Media Directory...
May 14 00:01:02.546455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:01:02.546471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 00:01:02.546487 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 00:01:02.546502 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 00:01:02.546520 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 00:01:02.546540 systemd[1]: Reached target machines.target - Containers.
May 14 00:01:02.546557 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 00:01:02.546572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:01:02.546587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:01:02.546602 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 00:01:02.546618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:01:02.546634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:01:02.546649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:01:02.546665 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 00:01:02.546684 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:01:02.546701 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 00:01:02.546718 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 00:01:02.546734 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 00:01:02.546764 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 00:01:02.546781 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 00:01:02.546798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:01:02.546814 kernel: fuse: init (API version 7.39)
May 14 00:01:02.546835 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:01:02.546852 kernel: loop: module loaded
May 14 00:01:02.546869 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:01:02.546885 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 00:01:02.546903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 00:01:02.546920 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 00:01:02.546938 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:01:02.546959 kernel: ACPI: bus type drm_connector registered
May 14 00:01:02.546975 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 00:01:02.546992 systemd[1]: Stopped verity-setup.service.
May 14 00:01:02.547009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:01:02.547026 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 00:01:02.547043 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 00:01:02.547099 systemd-journald[1133]: Collecting audit messages is disabled.
May 14 00:01:02.547145 systemd[1]: Mounted media.mount - External Media Directory.
May 14 00:01:02.547165 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 00:01:02.547182 systemd-journald[1133]: Journal started
May 14 00:01:02.547223 systemd-journald[1133]: Runtime Journal (/run/log/journal/9c0c5f25318243068a9271f8f8d17dc5) is 6M, max 48.2M, 42.2M free.
May 14 00:01:02.109295 systemd[1]: Queued start job for default target multi-user.target.
May 14 00:01:02.131383 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 00:01:02.132413 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 00:01:02.133807 systemd[1]: systemd-journald.service: Consumed 1.100s CPU time.
May 14 00:01:02.555619 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:01:02.556941 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 00:01:02.562570 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 00:01:02.564398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 00:01:02.566385 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:01:02.568713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 00:01:02.571317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 00:01:02.573614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:01:02.574314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:01:02.576518 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:01:02.576787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:01:02.578455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:01:02.578721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:01:02.580717 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 00:01:02.581001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 00:01:02.582796 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:01:02.583090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:01:02.584989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:01:02.587098 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 00:01:02.589547 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 00:01:02.591575 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 00:01:02.612929 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 00:01:02.631466 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 00:01:02.638879 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 00:01:02.640348 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 00:01:02.640402 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:01:02.643069 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 00:01:02.649498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 00:01:02.655094 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 00:01:02.661699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:01:02.675576 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 00:01:02.700766 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 00:01:02.703871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:01:02.714645 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 00:01:02.717139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:01:02.724288 systemd-journald[1133]: Time spent on flushing to /var/log/journal/9c0c5f25318243068a9271f8f8d17dc5 is 32.177ms for 1056 entries.
May 14 00:01:02.724288 systemd-journald[1133]: System Journal (/var/log/journal/9c0c5f25318243068a9271f8f8d17dc5) is 8M, max 195.6M, 187.6M free.
May 14 00:01:02.786647 systemd-journald[1133]: Received client request to flush runtime journal.
May 14 00:01:02.786722 kernel: loop0: detected capacity change from 0 to 147912
May 14 00:01:02.721067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:01:02.728563 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 00:01:02.739586 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 00:01:02.745116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:01:02.751094 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 00:01:02.753706 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 00:01:02.761139 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 00:01:02.778755 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 00:01:02.785822 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 00:01:02.788520 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 00:01:02.791316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 00:01:02.797236 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 00:01:02.799656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:01:02.808488 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 00:01:02.910593 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 00:01:02.920426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 00:01:02.922708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:01:03.033222 kernel: loop1: detected capacity change from 0 to 138176
May 14 00:01:03.046555 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 14 00:01:03.046585 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 14 00:01:03.057014 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:01:03.174436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 00:01:03.177345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 00:01:03.188879 kernel: loop2: detected capacity change from 0 to 205544
May 14 00:01:03.276477 kernel: loop3: detected capacity change from 0 to 147912
May 14 00:01:03.346683 kernel: loop4: detected capacity change from 0 to 138176
May 14 00:01:03.424409 kernel: loop5: detected capacity change from 0 to 205544
May 14 00:01:03.467516 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 00:01:03.468513 (sd-merge)[1206]: Merged extensions into '/usr'.
May 14 00:01:03.491511 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 00:01:03.491539 systemd[1]: Reloading...
May 14 00:01:03.631554 zram_generator::config[1233]: No configuration found.
May 14 00:01:03.863294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:01:03.966617 systemd[1]: Reloading finished in 474 ms.
May 14 00:01:04.006816 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 00:01:04.038596 systemd[1]: Starting ensure-sysext.service...
May 14 00:01:04.049858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:01:04.082398 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 00:01:04.082844 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 00:01:04.084251 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 00:01:04.084675 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 14 00:01:04.084773 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 14 00:01:04.088293 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 00:01:04.099393 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 00:01:04.098973 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:01:04.098981 systemd-tmpfiles[1271]: Skipping /boot
May 14 00:01:04.113818 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:01:04.117199 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 00:01:04.122198 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
May 14 00:01:04.122211 systemd[1]: Reloading...
May 14 00:01:04.124859 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:01:04.129357 systemd-tmpfiles[1271]: Skipping /boot
May 14 00:01:04.165409 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
May 14 00:01:04.216398 zram_generator::config[1305]: No configuration found.
May 14 00:01:04.333886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1329)
May 14 00:01:04.396406 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 14 00:01:04.400777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:01:04.406311 kernel: ACPI: button: Power Button [PWRF]
May 14 00:01:04.427424 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 00:01:04.446938 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 00:01:04.447196 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 14 00:01:04.447552 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 00:01:04.447764 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 14 00:01:04.578317 kernel: mousedev: PS/2 mouse device common for all mice
May 14 00:01:04.591358 kernel: kvm_amd: TSC scaling supported
May 14 00:01:04.591494 kernel: kvm_amd: Nested Virtualization enabled
May 14 00:01:04.591512 kernel: kvm_amd: Nested Paging enabled
May 14 00:01:04.591532 kernel: kvm_amd: LBR virtualization supported
May 14 00:01:04.592036 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 00:01:04.592766 kernel: kvm_amd: Virtual GIF supported
May 14 00:01:04.600549 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:01:04.603502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 00:01:04.604351 systemd[1]: Reloading finished in 481 ms.
May 14 00:01:04.620992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:01:04.623097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:01:04.633338 kernel: EDAC MC: Ver: 3.0.0
May 14 00:01:04.674385 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 00:01:04.681329 systemd[1]: Finished ensure-sysext.service.
May 14 00:01:04.717018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:01:04.738631 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:01:04.743465 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 00:01:04.745171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:01:04.748322 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 00:01:04.755511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:01:04.761550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:01:04.768804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:01:04.773710 lvm[1378]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 00:01:04.778579 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:01:04.781349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:01:04.786555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 00:01:04.788380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:01:04.793722 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 00:01:04.803454 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:01:04.808886 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:01:04.824387 augenrules[1407]: No rules
May 14 00:01:04.824637 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 00:01:04.828098 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 00:01:04.831155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:04.833045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:01:04.835115 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:01:04.835550 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:01:04.838699 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 00:01:04.843378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:01:04.843691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:01:04.845682 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:01:04.851706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:01:04.855043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:01:04.855321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:01:04.857363 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:01:04.857610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:01:04.859346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 00:01:04.862002 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 00:01:04.879477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:01:04.886748 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 00:01:04.887674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:01:04.887921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:01:04.890490 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 00:01:04.895002 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 00:01:04.895220 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 00:01:04.899152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 00:01:04.917151 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 00:01:04.921960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 00:01:04.925553 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 00:01:04.929753 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 00:01:04.963064 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 00:01:04.982729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:05.065350 systemd-networkd[1401]: lo: Link UP
May 14 00:01:05.065376 systemd-networkd[1401]: lo: Gained carrier
May 14 00:01:05.067557 systemd-networkd[1401]: Enumeration completed
May 14 00:01:05.067683 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:01:05.068203 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:01:05.068213 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:01:05.069224 systemd-networkd[1401]: eth0: Link UP
May 14 00:01:05.069229 systemd-networkd[1401]: eth0: Gained carrier
May 14 00:01:05.069247 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:01:05.081104 systemd-resolved[1403]: Positive Trust Anchors:
May 14 00:01:05.081126 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:01:05.081174 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:01:05.083602 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 00:01:05.087870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 00:01:05.089498 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 00:01:05.089737 systemd-resolved[1403]: Defaulting to hostname 'linux'.
May 14 00:01:05.091122 systemd[1]: Reached target time-set.target - System Time Set.
May 14 00:01:05.092543 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:01:05.095317 systemd[1]: Reached target network.target - Network.
May 14 00:01:05.096376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:01:05.097854 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:01:05.099486 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:01:05.100168 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 00:01:05.101865 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
May 14 00:01:05.102106 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 00:01:05.104568 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 00:01:05.756989 systemd-resolved[1403]: Clock change detected. Flushing caches.
May 14 00:01:05.757124 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 00:01:05.757189 systemd-timesyncd[1405]: Initial clock synchronization to Wed 2025-05-14 00:01:05.756932 UTC.
May 14 00:01:05.760031 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 00:01:05.761906 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 00:01:05.763668 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 00:01:05.763715 systemd[1]: Reached target paths.target - Path Units.
May 14 00:01:05.764817 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:01:05.769335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 00:01:05.773199 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 00:01:05.778020 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 00:01:05.779848 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 00:01:05.781442 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 00:01:05.787690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 00:01:05.789988 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 00:01:05.792965 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 00:01:05.794815 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 00:01:05.797455 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:01:05.798744 systemd[1]: Reached target basic.target - Basic System.
May 14 00:01:05.800015 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 00:01:05.800062 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 00:01:05.801750 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 00:01:05.804608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 00:01:05.807222 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 00:01:05.810652 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 00:01:05.812350 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 00:01:05.814743 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 00:01:05.819143 jq[1450]: false
May 14 00:01:05.819591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 00:01:05.825685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 00:01:05.833035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 00:01:05.840238 extend-filesystems[1451]: Found loop3
May 14 00:01:05.840238 extend-filesystems[1451]: Found loop4
May 14 00:01:05.840238 extend-filesystems[1451]: Found loop5
May 14 00:01:05.839012 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 00:01:05.849781 extend-filesystems[1451]: Found sr0
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda1
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda2
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda3
May 14 00:01:05.849781 extend-filesystems[1451]: Found usr
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda4
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda6
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda7
May 14 00:01:05.849781 extend-filesystems[1451]: Found vda9
May 14 00:01:05.849781 extend-filesystems[1451]: Checking size of /dev/vda9
May 14 00:01:05.841452 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 00:01:05.861931 dbus-daemon[1449]: [system] SELinux support is enabled
May 14 00:01:05.842240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 00:01:05.854857 systemd[1]: Starting update-engine.service - Update Engine...
May 14 00:01:05.859738 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 00:01:05.865883 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 00:01:05.872757 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 00:01:05.873099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 00:01:05.873655 systemd[1]: motdgen.service: Deactivated successfully.
May 14 00:01:05.873994 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 00:01:05.878563 extend-filesystems[1451]: Resized partition /dev/vda9
May 14 00:01:05.882580 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
May 14 00:01:05.882211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 00:01:05.886743 update_engine[1465]: I20250514 00:01:05.884814 1465 main.cc:92] Flatcar Update Engine starting
May 14 00:01:05.882591 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 00:01:05.891085 update_engine[1465]: I20250514 00:01:05.889989 1465 update_check_scheduler.cc:74] Next update check in 5m39s
May 14 00:01:05.891143 jq[1469]: true
May 14 00:01:05.894849 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 00:01:05.900942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 00:01:05.900975 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 00:01:05.902570 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 00:01:05.902594 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 00:01:05.905473 systemd[1]: Started update-engine.service - Update Engine.
May 14 00:01:05.906546 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1344)
May 14 00:01:05.908037 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 00:01:05.919748 jq[1477]: true
May 14 00:01:05.926773 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 00:01:05.956896 tar[1473]: linux-amd64/helm
May 14 00:01:05.973063 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 00:01:06.004152 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 00:01:06.013199 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 00:01:06.050185 systemd[1]: issuegen.service: Deactivated successfully.
May 14 00:01:06.050712 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 00:01:06.053789 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 00:01:06.059668 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button)
May 14 00:01:06.059703 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 00:01:06.060603 systemd-logind[1462]: New seat seat0.
May 14 00:01:06.072621 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 00:01:06.086965 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 00:01:06.112975 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 00:01:06.126133 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 00:01:06.129347 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 00:01:06.139760 systemd[1]: Reached target getty.target - Login Prompts.
May 14 00:01:06.193549 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 00:01:06.280850 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 00:01:06.280850 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 00:01:06.280850 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 00:01:06.285671 extend-filesystems[1451]: Resized filesystem in /dev/vda9
May 14 00:01:06.287273 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 00:01:06.287652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 00:01:06.301875 bash[1511]: Updated "/home/core/.ssh/authorized_keys"
May 14 00:01:06.304373 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 00:01:06.308971 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 00:01:06.385768 containerd[1478]: time="2025-05-14T00:01:06.385555503Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 14 00:01:06.415805 containerd[1478]: time="2025-05-14T00:01:06.415689317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.418814 containerd[1478]: time="2025-05-14T00:01:06.418756553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 14 00:01:06.418814 containerd[1478]: time="2025-05-14T00:01:06.418801076Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 14 00:01:06.418928 containerd[1478]: time="2025-05-14T00:01:06.418824621Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 14 00:01:06.419054 containerd[1478]: time="2025-05-14T00:01:06.419027972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 14 00:01:06.419054 containerd[1478]: time="2025-05-14T00:01:06.419047058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.419131 containerd[1478]: time="2025-05-14T00:01:06.419115406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 14 00:01:06.419131 containerd[1478]: time="2025-05-14T00:01:06.419129312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.419408 containerd[1478]: time="2025-05-14T00:01:06.419376917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 00:01:06.419408 containerd[1478]: time="2025-05-14T00:01:06.419393678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.419408 containerd[1478]: time="2025-05-14T00:01:06.419406422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 14 00:01:06.419488 containerd[1478]: time="2025-05-14T00:01:06.419416712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.419562 containerd[1478]: time="2025-05-14T00:01:06.419543920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.419882 containerd[1478]: time="2025-05-14T00:01:06.419852259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 14 00:01:06.420090 containerd[1478]: time="2025-05-14T00:01:06.420051954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 00:01:06.420090 containerd[1478]: time="2025-05-14T00:01:06.420069627Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 14 00:01:06.420219 containerd[1478]: time="2025-05-14T00:01:06.420191756Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 14 00:01:06.420296 containerd[1478]: time="2025-05-14T00:01:06.420268740Z" level=info msg="metadata content store policy set" policy=shared
May 14 00:01:06.436040 containerd[1478]: time="2025-05-14T00:01:06.435952079Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 14 00:01:06.436040 containerd[1478]: time="2025-05-14T00:01:06.436043000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 14 00:01:06.436040 containerd[1478]: time="2025-05-14T00:01:06.436065231Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 14 00:01:06.436373 containerd[1478]: time="2025-05-14T00:01:06.436085439Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 14 00:01:06.436373 containerd[1478]: time="2025-05-14T00:01:06.436103273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 14 00:01:06.436373 containerd[1478]: time="2025-05-14T00:01:06.436334086Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 14 00:01:06.436808 containerd[1478]: time="2025-05-14T00:01:06.436766207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.436919745Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.436949881Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.436967705Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.436985578Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437000356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437016797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437033929Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437050620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437067181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437082500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437096306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437120331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437136802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:01:06.437620 containerd[1478]: time="2025-05-14T00:01:06.437152151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437174773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437215730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437233103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437248401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437264832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437281704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437301922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437322250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437339282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437356604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437377744Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437404905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437430563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438034 containerd[1478]: time="2025-05-14T00:01:06.437447194Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437545489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437579813Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437595793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437612955Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437628214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437652910Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437668339Z" level=info msg="NRI interface is disabled by configuration." May 14 00:01:06.438386 containerd[1478]: time="2025-05-14T00:01:06.437681654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 00:01:06.438629 containerd[1478]: time="2025-05-14T00:01:06.438020019Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:01:06.438629 containerd[1478]: time="2025-05-14T00:01:06.438084751Z" level=info msg="Connect containerd service" May 14 00:01:06.438629 containerd[1478]: time="2025-05-14T00:01:06.438133893Z" level=info msg="using legacy CRI server" May 14 00:01:06.438629 containerd[1478]: time="2025-05-14T00:01:06.438142499Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:01:06.438629 containerd[1478]: time="2025-05-14T00:01:06.438274827Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:01:06.439217 containerd[1478]: time="2025-05-14T00:01:06.439175237Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:01:06.439468 containerd[1478]: time="2025-05-14T00:01:06.439380452Z" level=info msg="Start subscribing containerd event" May 14 00:01:06.439468 containerd[1478]: time="2025-05-14T00:01:06.439478716Z" level=info msg="Start recovering state" May 14 00:01:06.439650 containerd[1478]: time="2025-05-14T00:01:06.439618559Z" level=info msg="Start event monitor" May 14 00:01:06.439650 containerd[1478]: time="2025-05-14T00:01:06.439645479Z" level=info msg="Start 
snapshots syncer" May 14 00:01:06.439729 containerd[1478]: time="2025-05-14T00:01:06.439661610Z" level=info msg="Start cni network conf syncer for default" May 14 00:01:06.439729 containerd[1478]: time="2025-05-14T00:01:06.439675746Z" level=info msg="Start streaming server" May 14 00:01:06.439890 containerd[1478]: time="2025-05-14T00:01:06.439847318Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:01:06.440291 containerd[1478]: time="2025-05-14T00:01:06.439916037Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:01:06.442113 containerd[1478]: time="2025-05-14T00:01:06.441264898Z" level=info msg="containerd successfully booted in 0.058121s" May 14 00:01:06.441316 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:01:06.496772 tar[1473]: linux-amd64/LICENSE May 14 00:01:06.496887 tar[1473]: linux-amd64/README.md May 14 00:01:06.522977 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:01:06.927002 systemd-networkd[1401]: eth0: Gained IPv6LL May 14 00:01:06.930787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:01:06.933156 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:01:06.950022 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 00:01:06.953450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:06.956144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:01:06.980422 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:01:06.980939 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 00:01:06.982756 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:01:06.985351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 14 00:01:07.673986 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:01:07.687789 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:53728.service - OpenSSH per-connection server daemon (10.0.0.1:53728). May 14 00:01:07.712753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:07.714495 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:01:07.715861 systemd[1]: Startup finished in 865ms (kernel) + 7.482s (initrd) + 5.878s (userspace) = 14.226s. May 14 00:01:07.718702 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:07.763894 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:07.766062 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:07.780272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:01:07.780682 systemd-logind[1462]: New session 1 of user core. May 14 00:01:07.799063 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:01:07.822083 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:01:07.838050 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:01:07.841326 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:01:07.844438 systemd-logind[1462]: New session c1 of user core. May 14 00:01:08.019087 systemd[1576]: Queued start job for default target default.target. May 14 00:01:08.030310 systemd[1576]: Created slice app.slice - User Application Slice. May 14 00:01:08.030345 systemd[1576]: Reached target paths.target - Paths. May 14 00:01:08.030408 systemd[1576]: Reached target timers.target - Timers. 
May 14 00:01:08.032841 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:01:08.048624 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:01:08.048773 systemd[1576]: Reached target sockets.target - Sockets. May 14 00:01:08.048823 systemd[1576]: Reached target basic.target - Basic System. May 14 00:01:08.048864 systemd[1576]: Reached target default.target - Main User Target. May 14 00:01:08.048900 systemd[1576]: Startup finished in 196ms. May 14 00:01:08.049386 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:01:08.060963 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:01:08.122370 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:53744.service - OpenSSH per-connection server daemon (10.0.0.1:53744). May 14 00:01:08.190545 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 53744 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:08.192737 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.199457 systemd-logind[1462]: New session 2 of user core. May 14 00:01:08.206775 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:01:08.265275 sshd[1592]: Connection closed by 10.0.0.1 port 53744 May 14 00:01:08.267647 sshd-session[1590]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.276386 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:53744.service: Deactivated successfully. May 14 00:01:08.278680 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:01:08.279622 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. May 14 00:01:08.288901 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:53748.service - OpenSSH per-connection server daemon (10.0.0.1:53748). May 14 00:01:08.289997 systemd-logind[1462]: Removed session 2. 
May 14 00:01:08.327001 kubelet[1566]: E0514 00:01:08.326891 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:08.330239 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 53748 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:08.331224 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.331524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:08.331737 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:08.332101 systemd[1]: kubelet.service: Consumed 1.224s CPU time, 237.9M memory peak. May 14 00:01:08.340084 systemd-logind[1462]: New session 3 of user core. May 14 00:01:08.353876 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:01:08.405726 sshd[1602]: Connection closed by 10.0.0.1 port 53748 May 14 00:01:08.406249 sshd-session[1597]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.417838 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:53748.service: Deactivated successfully. May 14 00:01:08.420152 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:01:08.421834 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. May 14 00:01:08.430905 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764). May 14 00:01:08.431918 systemd-logind[1462]: Removed session 3. 
May 14 00:01:08.473438 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:08.475063 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.480433 systemd-logind[1462]: New session 4 of user core. May 14 00:01:08.495684 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 00:01:08.550010 sshd[1610]: Connection closed by 10.0.0.1 port 53764 May 14 00:01:08.550273 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.566093 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:53764.service: Deactivated successfully. May 14 00:01:08.568433 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:01:08.570183 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. May 14 00:01:08.583873 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:53768.service - OpenSSH per-connection server daemon (10.0.0.1:53768). May 14 00:01:08.585079 systemd-logind[1462]: Removed session 4. May 14 00:01:08.624000 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 53768 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:08.625866 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.631136 systemd-logind[1462]: New session 5 of user core. May 14 00:01:08.646755 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 00:01:08.707279 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:01:08.707732 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:01:08.728259 sudo[1619]: pam_unix(sudo:session): session closed for user root May 14 00:01:08.730324 sshd[1618]: Connection closed by 10.0.0.1 port 53768 May 14 00:01:08.730822 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.748806 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:53768.service: Deactivated successfully. May 14 00:01:08.750633 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:01:08.752384 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. May 14 00:01:08.765057 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784). May 14 00:01:08.766421 systemd-logind[1462]: Removed session 5. May 14 00:01:08.810601 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:08.812303 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:08.817964 systemd-logind[1462]: New session 6 of user core. May 14 00:01:08.827705 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 00:01:08.886659 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:01:08.887089 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:01:08.891839 sudo[1629]: pam_unix(sudo:session): session closed for user root May 14 00:01:08.899630 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:01:08.900012 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:01:08.925066 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:01:08.964036 augenrules[1651]: No rules May 14 00:01:08.966240 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:01:08.966677 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:01:08.967951 sudo[1628]: pam_unix(sudo:session): session closed for user root May 14 00:01:08.969755 sshd[1627]: Connection closed by 10.0.0.1 port 53784 May 14 00:01:08.970102 sshd-session[1624]: pam_unix(sshd:session): session closed for user core May 14 00:01:08.983728 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:53784.service: Deactivated successfully. May 14 00:01:08.986017 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:01:08.987871 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. May 14 00:01:08.996873 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800). May 14 00:01:08.998010 systemd-logind[1462]: Removed session 6. May 14 00:01:09.037056 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:09.038766 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:09.043688 systemd-logind[1462]: New session 7 of user core. 
May 14 00:01:09.059675 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 00:01:09.116061 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:01:09.116492 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:01:09.802010 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 00:01:09.802764 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:01:10.412381 dockerd[1683]: time="2025-05-14T00:01:10.412286115Z" level=info msg="Starting up" May 14 00:01:11.542900 dockerd[1683]: time="2025-05-14T00:01:11.542825200Z" level=info msg="Loading containers: start." May 14 00:01:11.909591 kernel: Initializing XFRM netlink socket May 14 00:01:12.017767 systemd-networkd[1401]: docker0: Link UP May 14 00:01:12.064652 dockerd[1683]: time="2025-05-14T00:01:12.064581679Z" level=info msg="Loading containers: done." May 14 00:01:12.094106 dockerd[1683]: time="2025-05-14T00:01:12.094042780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:01:12.094279 dockerd[1683]: time="2025-05-14T00:01:12.094168206Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 00:01:12.094331 dockerd[1683]: time="2025-05-14T00:01:12.094313659Z" level=info msg="Daemon has completed initialization" May 14 00:01:12.139969 dockerd[1683]: time="2025-05-14T00:01:12.139883918Z" level=info msg="API listen on /run/docker.sock" May 14 00:01:12.140085 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 14 00:01:13.569938 containerd[1478]: time="2025-05-14T00:01:13.569889641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 00:01:15.792393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704718909.mount: Deactivated successfully. May 14 00:01:18.434866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:01:18.452868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:18.618644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:18.625593 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:18.912734 kubelet[1917]: E0514 00:01:18.912578 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:18.920184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:18.920491 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:18.921017 systemd[1]: kubelet.service: Consumed 256ms CPU time, 96.8M memory peak. 
May 14 00:01:23.982539 containerd[1478]: time="2025-05-14T00:01:23.981092426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:23.986960 containerd[1478]: time="2025-05-14T00:01:23.986859418Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 00:01:23.988687 containerd[1478]: time="2025-05-14T00:01:23.988546964Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:23.993912 containerd[1478]: time="2025-05-14T00:01:23.993739468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:23.996531 containerd[1478]: time="2025-05-14T00:01:23.995721377Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 10.425781962s" May 14 00:01:23.996531 containerd[1478]: time="2025-05-14T00:01:23.995763556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 00:01:24.004824 containerd[1478]: time="2025-05-14T00:01:24.004414078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 00:01:28.005180 containerd[1478]: time="2025-05-14T00:01:28.005068068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.057465 containerd[1478]: time="2025-05-14T00:01:28.057087549Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 00:01:28.125214 containerd[1478]: time="2025-05-14T00:01:28.121462574Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.156814 containerd[1478]: time="2025-05-14T00:01:28.156687271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:28.158341 containerd[1478]: time="2025-05-14T00:01:28.158274349Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 4.153796752s" May 14 00:01:28.158341 containerd[1478]: time="2025-05-14T00:01:28.158335123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 00:01:28.159130 containerd[1478]: time="2025-05-14T00:01:28.159077365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 00:01:28.935527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:01:28.946857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:29.120135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:01:29.125939 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:29.262071 kubelet[1961]: E0514 00:01:29.261747 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:29.266280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:29.266621 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:29.267172 systemd[1]: kubelet.service: Consumed 289ms CPU time, 96.1M memory peak. May 14 00:01:32.829016 containerd[1478]: time="2025-05-14T00:01:32.827565408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.009057 containerd[1478]: time="2025-05-14T00:01:33.008722427Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 00:01:33.094749 containerd[1478]: time="2025-05-14T00:01:33.094568110Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.168278 containerd[1478]: time="2025-05-14T00:01:33.168207198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.169464 containerd[1478]: time="2025-05-14T00:01:33.169399826Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id 
\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 5.010270102s" May 14 00:01:33.169464 containerd[1478]: time="2025-05-14T00:01:33.169443648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 00:01:33.170114 containerd[1478]: time="2025-05-14T00:01:33.170077828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 00:01:36.027272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664623284.mount: Deactivated successfully. May 14 00:01:37.740581 containerd[1478]: time="2025-05-14T00:01:37.740480351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.745007 containerd[1478]: time="2025-05-14T00:01:37.744941863Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 00:01:37.747854 containerd[1478]: time="2025-05-14T00:01:37.747764459Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.755961 containerd[1478]: time="2025-05-14T00:01:37.755880248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.756737 containerd[1478]: time="2025-05-14T00:01:37.756657837Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 4.586518904s" May 14 00:01:37.756737 containerd[1478]: time="2025-05-14T00:01:37.756716076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 00:01:37.757558 containerd[1478]: time="2025-05-14T00:01:37.757351288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:01:38.620742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226221666.mount: Deactivated successfully. May 14 00:01:39.434884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 00:01:39.452802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:39.611979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:39.617954 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:39.658531 kubelet[2002]: E0514 00:01:39.658458 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:39.662773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:39.662982 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:39.663363 systemd[1]: kubelet.service: Consumed 204ms CPU time, 95.6M memory peak. 
May 14 00:01:44.194665 containerd[1478]: time="2025-05-14T00:01:44.194572239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.273418 containerd[1478]: time="2025-05-14T00:01:44.273303978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 00:01:44.293059 containerd[1478]: time="2025-05-14T00:01:44.293000910Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.361428 containerd[1478]: time="2025-05-14T00:01:44.361341641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:44.362418 containerd[1478]: time="2025-05-14T00:01:44.362385234Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 6.604999892s" May 14 00:01:44.362418 containerd[1478]: time="2025-05-14T00:01:44.362416072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 00:01:44.363088 containerd[1478]: time="2025-05-14T00:01:44.363024361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:01:45.411565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099216152.mount: Deactivated successfully. 
May 14 00:01:45.564396 containerd[1478]: time="2025-05-14T00:01:45.564285205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:45.590676 containerd[1478]: time="2025-05-14T00:01:45.590550649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 00:01:45.599762 containerd[1478]: time="2025-05-14T00:01:45.599704297Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:45.615020 containerd[1478]: time="2025-05-14T00:01:45.614936745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:45.615944 containerd[1478]: time="2025-05-14T00:01:45.615889008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.252804303s" May 14 00:01:45.615944 containerd[1478]: time="2025-05-14T00:01:45.615941296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 00:01:45.616632 containerd[1478]: time="2025-05-14T00:01:45.616524749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 00:01:46.212261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115120301.mount: Deactivated successfully. May 14 00:01:49.685220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
May 14 00:01:49.695235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:49.946084 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:01:49.948642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:50.229358 kubelet[2088]: E0514 00:01:50.227824 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:01:50.246914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:01:50.247179 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:01:50.247763 systemd[1]: kubelet.service: Consumed 283ms CPU time, 97.9M memory peak. May 14 00:01:51.499426 update_engine[1465]: I20250514 00:01:51.499228 1465 update_attempter.cc:509] Updating boot flags... 
May 14 00:01:51.563015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2125) May 14 00:01:51.705545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2128) May 14 00:01:51.782542 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2128) May 14 00:01:54.008383 containerd[1478]: time="2025-05-14T00:01:54.008288036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:54.009524 containerd[1478]: time="2025-05-14T00:01:54.009464221Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 00:01:54.011716 containerd[1478]: time="2025-05-14T00:01:54.011651290Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:54.019445 containerd[1478]: time="2025-05-14T00:01:54.018920483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:54.020298 containerd[1478]: time="2025-05-14T00:01:54.020211212Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 8.403642271s" May 14 00:01:54.020358 containerd[1478]: time="2025-05-14T00:01:54.020297875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 00:01:56.663627 
systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:56.668398 systemd[1]: kubelet.service: Consumed 283ms CPU time, 97.9M memory peak. May 14 00:01:56.696735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:56.765778 systemd[1]: Reload requested from client PID 2164 ('systemctl') (unit session-7.scope)... May 14 00:01:56.765817 systemd[1]: Reloading... May 14 00:01:56.930617 zram_generator::config[2215]: No configuration found. May 14 00:01:57.644672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:01:57.807051 systemd[1]: Reloading finished in 1040 ms. May 14 00:01:57.908268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:57.914645 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:01:57.929898 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:57.932500 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:01:57.932866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:57.932928 systemd[1]: kubelet.service: Consumed 189ms CPU time, 89.5M memory peak. May 14 00:01:57.948626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:01:58.164009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:01:58.164609 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:01:58.246684 kubelet[2264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:01:58.252861 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:01:58.252861 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:01:58.252861 kubelet[2264]: I0514 00:01:58.247410 2264 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:02:00.623339 kubelet[2264]: I0514 00:02:00.623266 2264 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:02:00.623339 kubelet[2264]: I0514 00:02:00.623310 2264 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:02:00.623914 kubelet[2264]: I0514 00:02:00.623625 2264 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:02:00.648172 kubelet[2264]: I0514 00:02:00.648127 2264 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:02:00.656352 kubelet[2264]: E0514 00:02:00.656301 2264 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:00.708633 kubelet[2264]: E0514 00:02:00.708578 2264 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" May 14 00:02:00.708633 kubelet[2264]: I0514 00:02:00.708616 2264 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:02:00.721085 kubelet[2264]: I0514 00:02:00.721039 2264 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:02:00.728463 kubelet[2264]: I0514 00:02:00.728409 2264 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:02:00.728714 kubelet[2264]: I0514 00:02:00.728661 2264 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:02:00.728901 kubelet[2264]: I0514 00:02:00.728702 2264 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value"
:{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:02:00.729081 kubelet[2264]: I0514 00:02:00.728902 2264 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:02:00.729081 kubelet[2264]: I0514 00:02:00.728912 2264 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:02:00.729081 kubelet[2264]: I0514 00:02:00.729055 2264 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:00.732273 kubelet[2264]: I0514 00:02:00.732212 2264 kubelet.go:408] "Attempting to sync node with API server" May 14 00:02:00.732273 kubelet[2264]: I0514 00:02:00.732250 2264 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:02:00.732385 kubelet[2264]: I0514 00:02:00.732364 2264 kubelet.go:314] "Adding apiserver pod source" May 14 00:02:00.732419 kubelet[2264]: I0514 00:02:00.732386 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:02:00.738109 kubelet[2264]: W0514 00:02:00.737960 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:00.738109 kubelet[2264]: E0514 00:02:00.738070 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:00.739021 kubelet[2264]: W0514 00:02:00.738805 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:00.739021 kubelet[2264]: E0514 00:02:00.738899 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:00.740220 kubelet[2264]: I0514 00:02:00.739942 2264 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 00:02:00.742238 kubelet[2264]: I0514 00:02:00.742164 2264 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:02:00.742393 kubelet[2264]: W0514 00:02:00.742279 2264 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 00:02:00.743444 kubelet[2264]: I0514 00:02:00.743263 2264 server.go:1269] "Started kubelet" May 14 00:02:00.743658 kubelet[2264]: I0514 00:02:00.743439 2264 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:02:00.743694 kubelet[2264]: I0514 00:02:00.743625 2264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:02:00.744090 kubelet[2264]: I0514 00:02:00.744056 2264 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:02:00.745207 kubelet[2264]: I0514 00:02:00.744852 2264 server.go:460] "Adding debug handlers to kubelet server" May 14 00:02:00.746574 kubelet[2264]: I0514 00:02:00.746287 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:02:00.746574 kubelet[2264]: I0514 00:02:00.746531 2264 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:02:00.747385 kubelet[2264]: I0514 00:02:00.746777 2264 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:02:00.747385 kubelet[2264]: I0514 00:02:00.746936 2264 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:02:00.747385 kubelet[2264]: I0514 00:02:00.747007 2264 reconciler.go:26] "Reconciler: start to sync state" May 14 00:02:00.747555 kubelet[2264]: W0514 00:02:00.747483 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:00.747695 kubelet[2264]: E0514 00:02:00.747557 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:00.748047 kubelet[2264]: E0514 00:02:00.748019 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:00.748793 kubelet[2264]: E0514 00:02:00.748755 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" May 14 00:02:00.748878 kubelet[2264]: E0514 00:02:00.748851 2264 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:02:00.749070 kubelet[2264]: I0514 00:02:00.749048 2264 factory.go:221] Registration of the systemd container factory successfully May 14 00:02:00.749194 kubelet[2264]: I0514 00:02:00.749158 2264 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:02:00.750747 kubelet[2264]: I0514 00:02:00.750705 2264 factory.go:221] Registration of the containerd container factory successfully May 14 00:02:00.752261 kubelet[2264]: E0514 00:02:00.749798 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bca5b3d6e1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:02:00.743226907 +0000 UTC m=+2.571925057,LastTimestamp:2025-05-14 00:02:00.743226907 +0000 UTC m=+2.571925057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:02:00.775147 kubelet[2264]: I0514 00:02:00.774726 2264 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:02:00.775147 kubelet[2264]: I0514 00:02:00.774754 2264 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:02:00.775147 kubelet[2264]: I0514 00:02:00.774797 2264 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:00.777537 kubelet[2264]: I0514 00:02:00.777072 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:02:00.779862 kubelet[2264]: I0514 00:02:00.779775 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:02:00.780016 kubelet[2264]: I0514 00:02:00.779914 2264 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:02:00.780016 kubelet[2264]: I0514 00:02:00.779953 2264 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:02:00.780080 kubelet[2264]: E0514 00:02:00.780014 2264 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:02:00.781774 kubelet[2264]: W0514 00:02:00.781302 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:00.781774 kubelet[2264]: E0514 00:02:00.781388 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:00.782642 kubelet[2264]: I0514 00:02:00.782621 2264 policy_none.go:49] "None policy: Start" May 14 00:02:00.784782 kubelet[2264]: I0514 00:02:00.784299 2264 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:02:00.784782 kubelet[2264]: I0514 00:02:00.784332 2264 state_mem.go:35] "Initializing new in-memory state store" May 14 00:02:00.797540 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:02:00.809862 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:02:00.813912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 00:02:00.826568 kubelet[2264]: I0514 00:02:00.826350 2264 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:02:00.826787 kubelet[2264]: I0514 00:02:00.826724 2264 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:02:00.826787 kubelet[2264]: I0514 00:02:00.826751 2264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:02:00.827152 kubelet[2264]: I0514 00:02:00.827115 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:02:00.828788 kubelet[2264]: E0514 00:02:00.828741 2264 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:02:00.890342 systemd[1]: Created slice kubepods-burstable-pod8542e97385d416100bd9ddfd443f7b47.slice - libcontainer container kubepods-burstable-pod8542e97385d416100bd9ddfd443f7b47.slice. 
May 14 00:02:00.909225 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 00:02:00.924547 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 00:02:00.928453 kubelet[2264]: I0514 00:02:00.928423 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:00.928923 kubelet[2264]: E0514 00:02:00.928856 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 14 00:02:00.949646 kubelet[2264]: E0514 00:02:00.949587 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" May 14 00:02:01.049363 kubelet[2264]: I0514 00:02:01.049249 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:01.049363 kubelet[2264]: I0514 00:02:01.049322 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:01.049363 kubelet[2264]: I0514 
00:02:01.049354 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:01.049363 kubelet[2264]: I0514 00:02:01.049383 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:01.049363 kubelet[2264]: I0514 00:02:01.049407 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:01.049840 kubelet[2264]: I0514 00:02:01.049427 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:01.049840 kubelet[2264]: I0514 00:02:01.049447 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:01.049840 kubelet[2264]: I0514 
00:02:01.049468 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:02:01.049840 kubelet[2264]: I0514 00:02:01.049487 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:01.130997 kubelet[2264]: I0514 00:02:01.130764 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:01.131344 kubelet[2264]: E0514 00:02:01.131272 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 14 00:02:01.207070 kubelet[2264]: E0514 00:02:01.207018 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:01.207789 containerd[1478]: time="2025-05-14T00:02:01.207749264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8542e97385d416100bd9ddfd443f7b47,Namespace:kube-system,Attempt:0,}" May 14 00:02:01.222045 kubelet[2264]: E0514 00:02:01.222008 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:01.222616 containerd[1478]: time="2025-05-14T00:02:01.222572347Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 00:02:01.227019 kubelet[2264]: E0514 00:02:01.226961 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:01.227557 containerd[1478]: time="2025-05-14T00:02:01.227497731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 00:02:01.350786 kubelet[2264]: E0514 00:02:01.350733 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" May 14 00:02:01.533306 kubelet[2264]: I0514 00:02:01.533182 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:01.533750 kubelet[2264]: E0514 00:02:01.533685 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 14 00:02:01.612195 kubelet[2264]: W0514 00:02:01.612082 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:01.612195 kubelet[2264]: E0514 00:02:01.612186 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" 
logger="UnhandledError" May 14 00:02:01.767155 kubelet[2264]: W0514 00:02:01.767050 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:01.767628 kubelet[2264]: E0514 00:02:01.767176 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:01.875099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651725052.mount: Deactivated successfully. May 14 00:02:01.882572 kubelet[2264]: W0514 00:02:01.882460 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:01.882672 kubelet[2264]: E0514 00:02:01.882588 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:01.884781 containerd[1478]: time="2025-05-14T00:02:01.884723311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:01.888183 containerd[1478]: time="2025-05-14T00:02:01.888124689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=312056" May 14 00:02:01.890186 containerd[1478]: time="2025-05-14T00:02:01.890130259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:01.892807 containerd[1478]: time="2025-05-14T00:02:01.892761653Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:01.893842 containerd[1478]: time="2025-05-14T00:02:01.893795352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 00:02:01.895072 containerd[1478]: time="2025-05-14T00:02:01.895032741Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:01.896190 containerd[1478]: time="2025-05-14T00:02:01.896135729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 00:02:01.898421 containerd[1478]: time="2025-05-14T00:02:01.898387652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:01.900547 containerd[1478]: time="2025-05-14T00:02:01.900499802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.883268ms" May 14 00:02:01.901380 
containerd[1478]: time="2025-05-14T00:02:01.901332984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 693.481297ms" May 14 00:02:01.902573 containerd[1478]: time="2025-05-14T00:02:01.902195180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.516505ms" May 14 00:02:01.970890 kubelet[2264]: W0514 00:02:01.970787 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 14 00:02:01.970890 kubelet[2264]: E0514 00:02:01.970847 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" May 14 00:02:02.133193 containerd[1478]: time="2025-05-14T00:02:02.131305127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:02.133193 containerd[1478]: time="2025-05-14T00:02:02.132765185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:02.133193 containerd[1478]: time="2025-05-14T00:02:02.132782297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.133193 containerd[1478]: time="2025-05-14T00:02:02.132903905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.134485 containerd[1478]: time="2025-05-14T00:02:02.134297388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:02.134485 containerd[1478]: time="2025-05-14T00:02:02.134389992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:02.134485 containerd[1478]: time="2025-05-14T00:02:02.134402295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.136589 containerd[1478]: time="2025-05-14T00:02:02.136068027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.141173 containerd[1478]: time="2025-05-14T00:02:02.140830568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:02.141173 containerd[1478]: time="2025-05-14T00:02:02.140904045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:02.141173 containerd[1478]: time="2025-05-14T00:02:02.140923071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.141173 containerd[1478]: time="2025-05-14T00:02:02.141054147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:02.151557 kubelet[2264]: E0514 00:02:02.151461 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" May 14 00:02:02.165776 systemd[1]: Started cri-containerd-1babadd47239f2a423eca88e60fc8ae49e1b24adae3d51bd66801ea348086aa5.scope - libcontainer container 1babadd47239f2a423eca88e60fc8ae49e1b24adae3d51bd66801ea348086aa5. May 14 00:02:02.172394 systemd[1]: Started cri-containerd-3766817d41a51f1428b35408cb2bf3f93e1fa5ac8040ac01f418e91790d88529.scope - libcontainer container 3766817d41a51f1428b35408cb2bf3f93e1fa5ac8040ac01f418e91790d88529. May 14 00:02:02.175643 systemd[1]: Started cri-containerd-5f6ab7d9c39ef2333dc190cf8bcd891b5c5cf7afac4e75ea5e94bfd2beefc846.scope - libcontainer container 5f6ab7d9c39ef2333dc190cf8bcd891b5c5cf7afac4e75ea5e94bfd2beefc846. 
May 14 00:02:02.243542 containerd[1478]: time="2025-05-14T00:02:02.243476170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8542e97385d416100bd9ddfd443f7b47,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f6ab7d9c39ef2333dc190cf8bcd891b5c5cf7afac4e75ea5e94bfd2beefc846\"" May 14 00:02:02.244774 kubelet[2264]: E0514 00:02:02.244749 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:02.245207 containerd[1478]: time="2025-05-14T00:02:02.245182671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1babadd47239f2a423eca88e60fc8ae49e1b24adae3d51bd66801ea348086aa5\"" May 14 00:02:02.246444 kubelet[2264]: E0514 00:02:02.246408 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:02.249116 containerd[1478]: time="2025-05-14T00:02:02.249077503Z" level=info msg="CreateContainer within sandbox \"1babadd47239f2a423eca88e60fc8ae49e1b24adae3d51bd66801ea348086aa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:02:02.249450 containerd[1478]: time="2025-05-14T00:02:02.249353862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3766817d41a51f1428b35408cb2bf3f93e1fa5ac8040ac01f418e91790d88529\"" May 14 00:02:02.250040 kubelet[2264]: E0514 00:02:02.249927 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:02.251011 containerd[1478]: 
time="2025-05-14T00:02:02.250857381Z" level=info msg="CreateContainer within sandbox \"5f6ab7d9c39ef2333dc190cf8bcd891b5c5cf7afac4e75ea5e94bfd2beefc846\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:02:02.253639 containerd[1478]: time="2025-05-14T00:02:02.253577651Z" level=info msg="CreateContainer within sandbox \"3766817d41a51f1428b35408cb2bf3f93e1fa5ac8040ac01f418e91790d88529\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:02:02.282488 containerd[1478]: time="2025-05-14T00:02:02.282418558Z" level=info msg="CreateContainer within sandbox \"1babadd47239f2a423eca88e60fc8ae49e1b24adae3d51bd66801ea348086aa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76052a9700f6d02e5a19c31dc5d166873e3143fedd1c1ee4e58b3daf3f3fed06\"" May 14 00:02:02.283452 containerd[1478]: time="2025-05-14T00:02:02.283392855Z" level=info msg="StartContainer for \"76052a9700f6d02e5a19c31dc5d166873e3143fedd1c1ee4e58b3daf3f3fed06\"" May 14 00:02:02.318692 systemd[1]: Started cri-containerd-76052a9700f6d02e5a19c31dc5d166873e3143fedd1c1ee4e58b3daf3f3fed06.scope - libcontainer container 76052a9700f6d02e5a19c31dc5d166873e3143fedd1c1ee4e58b3daf3f3fed06. 
May 14 00:02:02.335528 kubelet[2264]: I0514 00:02:02.335460 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:02.336080 kubelet[2264]: E0514 00:02:02.336031 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 14 00:02:02.490322 containerd[1478]: time="2025-05-14T00:02:02.490261518Z" level=info msg="StartContainer for \"76052a9700f6d02e5a19c31dc5d166873e3143fedd1c1ee4e58b3daf3f3fed06\" returns successfully" May 14 00:02:02.490487 containerd[1478]: time="2025-05-14T00:02:02.490330157Z" level=info msg="CreateContainer within sandbox \"5f6ab7d9c39ef2333dc190cf8bcd891b5c5cf7afac4e75ea5e94bfd2beefc846\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"641d3baecab0ca7a795c861a0dcd80dc8d55ece5b0dc00306e43cade6bfae75f\"" May 14 00:02:02.490487 containerd[1478]: time="2025-05-14T00:02:02.490268621Z" level=info msg="CreateContainer within sandbox \"3766817d41a51f1428b35408cb2bf3f93e1fa5ac8040ac01f418e91790d88529\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d5edc34cb8064fd29c989d4a8f522edc3ab63472441981b25450924aa9cfe946\"" May 14 00:02:02.491106 containerd[1478]: time="2025-05-14T00:02:02.490950871Z" level=info msg="StartContainer for \"641d3baecab0ca7a795c861a0dcd80dc8d55ece5b0dc00306e43cade6bfae75f\"" May 14 00:02:02.491106 containerd[1478]: time="2025-05-14T00:02:02.490969586Z" level=info msg="StartContainer for \"d5edc34cb8064fd29c989d4a8f522edc3ab63472441981b25450924aa9cfe946\"" May 14 00:02:02.544660 systemd[1]: Started cri-containerd-d5edc34cb8064fd29c989d4a8f522edc3ab63472441981b25450924aa9cfe946.scope - libcontainer container d5edc34cb8064fd29c989d4a8f522edc3ab63472441981b25450924aa9cfe946. 
May 14 00:02:02.547957 systemd[1]: Started cri-containerd-641d3baecab0ca7a795c861a0dcd80dc8d55ece5b0dc00306e43cade6bfae75f.scope - libcontainer container 641d3baecab0ca7a795c861a0dcd80dc8d55ece5b0dc00306e43cade6bfae75f. May 14 00:02:02.647828 containerd[1478]: time="2025-05-14T00:02:02.647738090Z" level=info msg="StartContainer for \"641d3baecab0ca7a795c861a0dcd80dc8d55ece5b0dc00306e43cade6bfae75f\" returns successfully" May 14 00:02:02.648020 containerd[1478]: time="2025-05-14T00:02:02.647893682Z" level=info msg="StartContainer for \"d5edc34cb8064fd29c989d4a8f522edc3ab63472441981b25450924aa9cfe946\" returns successfully" May 14 00:02:02.857003 kubelet[2264]: E0514 00:02:02.856792 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:02.859323 kubelet[2264]: E0514 00:02:02.859301 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:02.860691 kubelet[2264]: E0514 00:02:02.860672 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:03.773893 kubelet[2264]: E0514 00:02:03.773828 2264 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:02:03.862606 kubelet[2264]: E0514 00:02:03.862561 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:03.863076 kubelet[2264]: E0514 00:02:03.862867 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:03.937696 kubelet[2264]: I0514 00:02:03.937633 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:03.946497 kubelet[2264]: I0514 00:02:03.946440 2264 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:02:03.946497 kubelet[2264]: E0514 00:02:03.946481 2264 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 00:02:04.741996 kubelet[2264]: I0514 00:02:04.740643 2264 apiserver.go:52] "Watching apiserver" May 14 00:02:04.748050 kubelet[2264]: I0514 00:02:04.748002 2264 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:02:07.058153 kubelet[2264]: E0514 00:02:07.058110 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:08.035934 kubelet[2264]: E0514 00:02:08.035882 2264 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:09.448000 systemd[1]: Reload requested from client PID 2547 ('systemctl') (unit session-7.scope)... May 14 00:02:09.448019 systemd[1]: Reloading... May 14 00:02:09.540058 zram_generator::config[2591]: No configuration found. May 14 00:02:09.689305 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:09.840116 systemd[1]: Reloading finished in 391 ms. May 14 00:02:09.881414 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:09.893949 systemd[1]: kubelet.service: Deactivated successfully. 
May 14 00:02:09.894943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:09.895272 systemd[1]: kubelet.service: Consumed 1.274s CPU time, 119.3M memory peak. May 14 00:02:09.904389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:10.097303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:10.103201 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:02:10.145305 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:02:10.145305 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:02:10.145305 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:02:10.145825 kubelet[2636]: I0514 00:02:10.145351 2636 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:02:10.158640 kubelet[2636]: I0514 00:02:10.158301 2636 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:02:10.158640 kubelet[2636]: I0514 00:02:10.158334 2636 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:02:10.158640 kubelet[2636]: I0514 00:02:10.158652 2636 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:02:10.160131 kubelet[2636]: I0514 00:02:10.160111 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:02:10.162185 kubelet[2636]: I0514 00:02:10.162143 2636 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:02:10.165132 kubelet[2636]: E0514 00:02:10.165086 2636 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:02:10.165235 kubelet[2636]: I0514 00:02:10.165149 2636 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:02:10.172394 kubelet[2636]: I0514 00:02:10.172356 2636 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:02:10.172609 kubelet[2636]: I0514 00:02:10.172582 2636 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:02:10.173685 kubelet[2636]: I0514 00:02:10.172823 2636 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:02:10.173685 kubelet[2636]: I0514 00:02:10.172890 2636 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 00:02:10.173685 kubelet[2636]: I0514 00:02:10.173323 2636 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:02:10.173685 kubelet[2636]: I0514 00:02:10.173337 2636 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:02:10.173905 kubelet[2636]: I0514 00:02:10.173414 2636 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:10.173905 kubelet[2636]: I0514 00:02:10.173627 2636 kubelet.go:408] "Attempting to sync node with API server" May 14 00:02:10.173905 kubelet[2636]: I0514 00:02:10.173643 2636 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:02:10.173905 kubelet[2636]: I0514 00:02:10.173691 2636 kubelet.go:314] "Adding apiserver pod source" May 14 00:02:10.173905 kubelet[2636]: I0514 00:02:10.173713 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:02:10.178634 kubelet[2636]: I0514 00:02:10.178553 2636 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 00:02:10.179363 kubelet[2636]: I0514 00:02:10.179290 2636 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:02:10.180195 kubelet[2636]: I0514 00:02:10.180049 2636 server.go:1269] "Started kubelet" May 14 00:02:10.183546 kubelet[2636]: I0514 00:02:10.182949 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:02:10.193620 kubelet[2636]: I0514 00:02:10.191604 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:02:10.194285 kubelet[2636]: I0514 00:02:10.194264 2636 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:02:10.194785 kubelet[2636]: E0514 00:02:10.194571 2636 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:02:10.194785 kubelet[2636]: I0514 00:02:10.194587 2636 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:02:10.197547 kubelet[2636]: I0514 00:02:10.197491 2636 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:02:10.197770 kubelet[2636]: I0514 00:02:10.197674 2636 reconciler.go:26] "Reconciler: start to sync state" May 14 00:02:10.200405 kubelet[2636]: I0514 00:02:10.200377 2636 server.go:460] "Adding debug handlers to kubelet server" May 14 00:02:10.201284 kubelet[2636]: I0514 00:02:10.193731 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:02:10.201378 kubelet[2636]: I0514 00:02:10.201370 2636 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:02:10.202993 kubelet[2636]: I0514 00:02:10.202974 2636 factory.go:221] Registration of the systemd container factory successfully May 14 00:02:10.204374 kubelet[2636]: E0514 00:02:10.204350 2636 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:02:10.204582 kubelet[2636]: I0514 00:02:10.204351 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:02:10.206235 kubelet[2636]: I0514 00:02:10.206163 2636 factory.go:221] Registration of the containerd container factory successfully May 14 00:02:10.208805 kubelet[2636]: I0514 00:02:10.208760 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:02:10.210484 kubelet[2636]: I0514 00:02:10.210440 2636 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:02:10.210598 kubelet[2636]: I0514 00:02:10.210494 2636 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:02:10.210598 kubelet[2636]: I0514 00:02:10.210536 2636 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:02:10.210668 kubelet[2636]: E0514 00:02:10.210589 2636 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:02:10.214195 sudo[2659]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 00:02:10.214697 sudo[2659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 00:02:10.249546 kubelet[2636]: I0514 00:02:10.249497 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:02:10.249546 kubelet[2636]: I0514 00:02:10.249530 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:02:10.249546 kubelet[2636]: I0514 00:02:10.249550 2636 state_mem.go:36] "Initialized new in-memory state store" May 14 00:02:10.249723 kubelet[2636]: I0514 00:02:10.249713 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:02:10.249773 kubelet[2636]: I0514 00:02:10.249725 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:02:10.249773 kubelet[2636]: I0514 00:02:10.249745 2636 policy_none.go:49] "None policy: Start" May 14 00:02:10.250305 kubelet[2636]: I0514 00:02:10.250289 2636 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:02:10.250347 kubelet[2636]: I0514 00:02:10.250317 2636 state_mem.go:35] "Initializing new in-memory state store" May 14 00:02:10.250460 kubelet[2636]: I0514 00:02:10.250447 2636 state_mem.go:75] "Updated machine memory state" May 14 00:02:10.254731 kubelet[2636]: I0514 00:02:10.254631 2636 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" May 14 00:02:10.254832 kubelet[2636]: I0514 00:02:10.254809 2636 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:02:10.254872 kubelet[2636]: I0514 00:02:10.254828 2636 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:02:10.255418 kubelet[2636]: I0514 00:02:10.255312 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:02:10.322766 kubelet[2636]: E0514 00:02:10.322443 2636 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:02:10.360914 kubelet[2636]: I0514 00:02:10.360782 2636 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:02:10.370727 kubelet[2636]: I0514 00:02:10.370681 2636 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 00:02:10.370864 kubelet[2636]: I0514 00:02:10.370784 2636 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:02:10.398963 kubelet[2636]: I0514 00:02:10.398891 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:10.398963 kubelet[2636]: I0514 00:02:10.398946 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:10.399159 kubelet[2636]: I0514 00:02:10.398986 2636 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:10.399159 kubelet[2636]: I0514 00:02:10.399009 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:02:10.399159 kubelet[2636]: I0514 00:02:10.399126 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:10.399250 kubelet[2636]: I0514 00:02:10.399179 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:10.399250 kubelet[2636]: I0514 00:02:10.399205 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8542e97385d416100bd9ddfd443f7b47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8542e97385d416100bd9ddfd443f7b47\") " pod="kube-system/kube-apiserver-localhost" May 14 00:02:10.399250 kubelet[2636]: I0514 00:02:10.399233 2636 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:10.399330 kubelet[2636]: I0514 00:02:10.399252 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:02:10.623473 kubelet[2636]: E0514 00:02:10.622866 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:10.623473 kubelet[2636]: E0514 00:02:10.623229 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:10.624244 kubelet[2636]: E0514 00:02:10.624220 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:10.737646 sudo[2659]: pam_unix(sudo:session): session closed for user root May 14 00:02:11.181744 kubelet[2636]: I0514 00:02:11.181705 2636 apiserver.go:52] "Watching apiserver" May 14 00:02:11.197688 kubelet[2636]: I0514 00:02:11.197594 2636 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:02:11.229407 kubelet[2636]: E0514 00:02:11.229081 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 14 00:02:11.229407 kubelet[2636]: E0514 00:02:11.229397 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:11.236856 kubelet[2636]: E0514 00:02:11.236807 2636 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:02:11.237053 kubelet[2636]: E0514 00:02:11.237025 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:11.269888 kubelet[2636]: I0514 00:02:11.269821 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.26978601 podStartE2EDuration="5.26978601s" podCreationTimestamp="2025-05-14 00:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:11.269677546 +0000 UTC m=+1.161396347" watchObservedRunningTime="2025-05-14 00:02:11.26978601 +0000 UTC m=+1.161504801" May 14 00:02:11.278345 kubelet[2636]: I0514 00:02:11.278012 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.278000787 podStartE2EDuration="1.278000787s" podCreationTimestamp="2025-05-14 00:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:11.277748935 +0000 UTC m=+1.169467726" watchObservedRunningTime="2025-05-14 00:02:11.278000787 +0000 UTC m=+1.169719578" May 14 00:02:11.291879 kubelet[2636]: I0514 00:02:11.291793 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.29177386 podStartE2EDuration="1.29177386s" podCreationTimestamp="2025-05-14 00:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:11.290613444 +0000 UTC m=+1.182332245" watchObservedRunningTime="2025-05-14 00:02:11.29177386 +0000 UTC m=+1.183492652" May 14 00:02:11.970908 sudo[1663]: pam_unix(sudo:session): session closed for user root May 14 00:02:11.973972 sshd[1662]: Connection closed by 10.0.0.1 port 53800 May 14 00:02:11.974975 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 14 00:02:11.980997 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:53800.service: Deactivated successfully. May 14 00:02:11.984619 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:02:11.985004 systemd[1]: session-7.scope: Consumed 5.139s CPU time, 257.3M memory peak. May 14 00:02:11.987732 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. May 14 00:02:11.989159 systemd-logind[1462]: Removed session 7. May 14 00:02:12.230854 kubelet[2636]: E0514 00:02:12.230718 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:13.663403 kubelet[2636]: I0514 00:02:13.663356 2636 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:02:13.664037 kubelet[2636]: I0514 00:02:13.663969 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:02:13.664080 containerd[1478]: time="2025-05-14T00:02:13.663756150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 00:02:14.731667 systemd[1]: Created slice kubepods-besteffort-pod0b47f9a8_87f1_43eb_9126_3be3955d509d.slice - libcontainer container kubepods-besteffort-pod0b47f9a8_87f1_43eb_9126_3be3955d509d.slice. May 14 00:02:14.839866 kubelet[2636]: I0514 00:02:14.839802 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b47f9a8-87f1-43eb-9126-3be3955d509d-xtables-lock\") pod \"kube-proxy-jtskp\" (UID: \"0b47f9a8-87f1-43eb-9126-3be3955d509d\") " pod="kube-system/kube-proxy-jtskp" May 14 00:02:14.839866 kubelet[2636]: I0514 00:02:14.839842 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b47f9a8-87f1-43eb-9126-3be3955d509d-lib-modules\") pod \"kube-proxy-jtskp\" (UID: \"0b47f9a8-87f1-43eb-9126-3be3955d509d\") " pod="kube-system/kube-proxy-jtskp" May 14 00:02:14.839866 kubelet[2636]: I0514 00:02:14.839867 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lnq\" (UniqueName: \"kubernetes.io/projected/0b47f9a8-87f1-43eb-9126-3be3955d509d-kube-api-access-p8lnq\") pod \"kube-proxy-jtskp\" (UID: \"0b47f9a8-87f1-43eb-9126-3be3955d509d\") " pod="kube-system/kube-proxy-jtskp" May 14 00:02:14.839866 kubelet[2636]: I0514 00:02:14.839885 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b47f9a8-87f1-43eb-9126-3be3955d509d-kube-proxy\") pod \"kube-proxy-jtskp\" (UID: \"0b47f9a8-87f1-43eb-9126-3be3955d509d\") " pod="kube-system/kube-proxy-jtskp" May 14 00:02:15.144403 systemd[1]: Created slice kubepods-burstable-pod16ae0a93_1f5c_4dfd_b70c_84df7a6e2ce2.slice - libcontainer container kubepods-burstable-pod16ae0a93_1f5c_4dfd_b70c_84df7a6e2ce2.slice. 
May 14 00:02:15.242170 kubelet[2636]: I0514 00:02:15.242133 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-clustermesh-secrets\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242170 kubelet[2636]: I0514 00:02:15.242165 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cni-path\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242170 kubelet[2636]: I0514 00:02:15.242180 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-lib-modules\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242195 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hubble-tls\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242213 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-bpf-maps\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242225 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-net\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242265 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-kernel\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242304 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzp8\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-kube-api-access-jhzp8\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242370 kubelet[2636]: I0514 00:02:15.242328 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-run\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242529 kubelet[2636]: I0514 00:02:15.242354 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hostproc\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242529 kubelet[2636]: I0514 00:02:15.242368 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-etc-cni-netd\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242529 kubelet[2636]: I0514 00:02:15.242407 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-cgroup\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242529 kubelet[2636]: I0514 00:02:15.242428 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-xtables-lock\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.242529 kubelet[2636]: I0514 00:02:15.242442 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-config-path\") pod \"cilium-4f64w\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " pod="kube-system/cilium-4f64w" May 14 00:02:15.343030 kubelet[2636]: E0514 00:02:15.342997 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:15.348823 containerd[1478]: time="2025-05-14T00:02:15.348777599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtskp,Uid:0b47f9a8-87f1-43eb-9126-3be3955d509d,Namespace:kube-system,Attempt:0,}" May 14 00:02:15.812770 systemd[1]: Created slice kubepods-besteffort-pod602c567d_66cd_4137_8238_a3ae609b794e.slice - libcontainer container kubepods-besteffort-pod602c567d_66cd_4137_8238_a3ae609b794e.slice. 
May 14 00:02:15.946812 kubelet[2636]: I0514 00:02:15.946768 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602c567d-66cd-4137-8238-a3ae609b794e-cilium-config-path\") pod \"cilium-operator-5d85765b45-n2596\" (UID: \"602c567d-66cd-4137-8238-a3ae609b794e\") " pod="kube-system/cilium-operator-5d85765b45-n2596" May 14 00:02:15.946812 kubelet[2636]: I0514 00:02:15.946811 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9pb\" (UniqueName: \"kubernetes.io/projected/602c567d-66cd-4137-8238-a3ae609b794e-kube-api-access-vk9pb\") pod \"cilium-operator-5d85765b45-n2596\" (UID: \"602c567d-66cd-4137-8238-a3ae609b794e\") " pod="kube-system/cilium-operator-5d85765b45-n2596" May 14 00:02:16.055536 kubelet[2636]: E0514 00:02:16.052604 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:16.056259 containerd[1478]: time="2025-05-14T00:02:16.054205387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4f64w,Uid:16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2,Namespace:kube-system,Attempt:0,}" May 14 00:02:16.065986 containerd[1478]: time="2025-05-14T00:02:16.065580235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:16.065986 containerd[1478]: time="2025-05-14T00:02:16.065707614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:16.065986 containerd[1478]: time="2025-05-14T00:02:16.065728193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.067056 containerd[1478]: time="2025-05-14T00:02:16.066294686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.088678 systemd[1]: Started cri-containerd-3aa2be106311babb6639f13c13a37d98f6ad9c3e745b92eafa0e78ccde0f1949.scope - libcontainer container 3aa2be106311babb6639f13c13a37d98f6ad9c3e745b92eafa0e78ccde0f1949. May 14 00:02:16.115125 containerd[1478]: time="2025-05-14T00:02:16.114690669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtskp,Uid:0b47f9a8-87f1-43eb-9126-3be3955d509d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa2be106311babb6639f13c13a37d98f6ad9c3e745b92eafa0e78ccde0f1949\"" May 14 00:02:16.115676 kubelet[2636]: E0514 00:02:16.115594 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:16.117966 containerd[1478]: time="2025-05-14T00:02:16.117930947Z" level=info msg="CreateContainer within sandbox \"3aa2be106311babb6639f13c13a37d98f6ad9c3e745b92eafa0e78ccde0f1949\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:02:16.415965 kubelet[2636]: E0514 00:02:16.415789 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:16.416488 containerd[1478]: time="2025-05-14T00:02:16.416434999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n2596,Uid:602c567d-66cd-4137-8238-a3ae609b794e,Namespace:kube-system,Attempt:0,}" May 14 00:02:16.684607 containerd[1478]: time="2025-05-14T00:02:16.683354902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:16.684607 containerd[1478]: time="2025-05-14T00:02:16.684262155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:16.684607 containerd[1478]: time="2025-05-14T00:02:16.684283385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.684607 containerd[1478]: time="2025-05-14T00:02:16.684456409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.748789 systemd[1]: Started cri-containerd-e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026.scope - libcontainer container e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026. May 14 00:02:16.761399 containerd[1478]: time="2025-05-14T00:02:16.761345963Z" level=info msg="CreateContainer within sandbox \"3aa2be106311babb6639f13c13a37d98f6ad9c3e745b92eafa0e78ccde0f1949\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd44bdd6a0bc04c76bb11954904da2994fe63f6476b365250a00a725d066a1d4\"" May 14 00:02:16.762892 containerd[1478]: time="2025-05-14T00:02:16.762304711Z" level=info msg="StartContainer for \"dd44bdd6a0bc04c76bb11954904da2994fe63f6476b365250a00a725d066a1d4\"" May 14 00:02:16.778993 containerd[1478]: time="2025-05-14T00:02:16.778846722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:16.778993 containerd[1478]: time="2025-05-14T00:02:16.778913067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:16.778993 containerd[1478]: time="2025-05-14T00:02:16.778931802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.779220 containerd[1478]: time="2025-05-14T00:02:16.779020077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:16.783880 containerd[1478]: time="2025-05-14T00:02:16.783816865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4f64w,Uid:16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\"" May 14 00:02:16.784987 kubelet[2636]: E0514 00:02:16.784958 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:16.786475 containerd[1478]: time="2025-05-14T00:02:16.786365617Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:02:16.800818 systemd[1]: Started cri-containerd-dd44bdd6a0bc04c76bb11954904da2994fe63f6476b365250a00a725d066a1d4.scope - libcontainer container dd44bdd6a0bc04c76bb11954904da2994fe63f6476b365250a00a725d066a1d4. May 14 00:02:16.810719 systemd[1]: Started cri-containerd-79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c.scope - libcontainer container 79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c. 
May 14 00:02:16.846950 containerd[1478]: time="2025-05-14T00:02:16.846887717Z" level=info msg="StartContainer for \"dd44bdd6a0bc04c76bb11954904da2994fe63f6476b365250a00a725d066a1d4\" returns successfully" May 14 00:02:16.865721 containerd[1478]: time="2025-05-14T00:02:16.865663047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n2596,Uid:602c567d-66cd-4137-8238-a3ae609b794e,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\"" May 14 00:02:16.866432 kubelet[2636]: E0514 00:02:16.866393 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:17.092168 kubelet[2636]: E0514 00:02:17.091611 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:17.247024 kubelet[2636]: E0514 00:02:17.246969 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:17.250986 kubelet[2636]: E0514 00:02:17.250936 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:17.261209 kubelet[2636]: I0514 00:02:17.260906 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jtskp" podStartSLOduration=3.260846875 podStartE2EDuration="3.260846875s" podCreationTimestamp="2025-05-14 00:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:17.26084958 +0000 UTC m=+7.152568391" watchObservedRunningTime="2025-05-14 
00:02:17.260846875 +0000 UTC m=+7.152565666" May 14 00:02:18.252639 kubelet[2636]: E0514 00:02:18.252609 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:19.052915 kubelet[2636]: E0514 00:02:19.052817 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:19.254237 kubelet[2636]: E0514 00:02:19.254199 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:19.591504 kubelet[2636]: E0514 00:02:19.591469 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:20.256377 kubelet[2636]: E0514 00:02:20.256337 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:26.471618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234906214.mount: Deactivated successfully. 
May 14 00:02:29.518935 containerd[1478]: time="2025-05-14T00:02:29.518851607Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:29.520321 containerd[1478]: time="2025-05-14T00:02:29.520267564Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 00:02:29.522010 containerd[1478]: time="2025-05-14T00:02:29.521929983Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:29.523848 containerd[1478]: time="2025-05-14T00:02:29.523803908Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.737352872s" May 14 00:02:29.523848 containerd[1478]: time="2025-05-14T00:02:29.523836890Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 00:02:29.528096 containerd[1478]: time="2025-05-14T00:02:29.528045176Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:02:29.529639 containerd[1478]: time="2025-05-14T00:02:29.529603540Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:02:29.546464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274399414.mount: Deactivated successfully. May 14 00:02:29.548469 containerd[1478]: time="2025-05-14T00:02:29.548417247Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\"" May 14 00:02:29.549050 containerd[1478]: time="2025-05-14T00:02:29.548994030Z" level=info msg="StartContainer for \"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\"" May 14 00:02:29.580667 systemd[1]: Started cri-containerd-c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991.scope - libcontainer container c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991. May 14 00:02:29.611078 containerd[1478]: time="2025-05-14T00:02:29.611030452Z" level=info msg="StartContainer for \"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\" returns successfully" May 14 00:02:29.625595 systemd[1]: cri-containerd-c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991.scope: Deactivated successfully. 
May 14 00:02:30.179803 containerd[1478]: time="2025-05-14T00:02:30.179732272Z" level=info msg="shim disconnected" id=c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991 namespace=k8s.io May 14 00:02:30.179803 containerd[1478]: time="2025-05-14T00:02:30.179789039Z" level=warning msg="cleaning up after shim disconnected" id=c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991 namespace=k8s.io May 14 00:02:30.179803 containerd[1478]: time="2025-05-14T00:02:30.179799148Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:02:30.284278 kubelet[2636]: E0514 00:02:30.284244 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:30.290322 containerd[1478]: time="2025-05-14T00:02:30.290267589Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:02:30.310101 containerd[1478]: time="2025-05-14T00:02:30.310036368Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\"" May 14 00:02:30.310627 containerd[1478]: time="2025-05-14T00:02:30.310577243Z" level=info msg="StartContainer for \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\"" May 14 00:02:30.339735 systemd[1]: Started cri-containerd-c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695.scope - libcontainer container c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695. 
May 14 00:02:30.368537 containerd[1478]: time="2025-05-14T00:02:30.368454903Z" level=info msg="StartContainer for \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\" returns successfully" May 14 00:02:30.382802 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:02:30.383395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:30.383736 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 00:02:30.389901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:02:30.390189 systemd[1]: cri-containerd-c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695.scope: Deactivated successfully. May 14 00:02:30.418466 containerd[1478]: time="2025-05-14T00:02:30.418365385Z" level=info msg="shim disconnected" id=c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695 namespace=k8s.io May 14 00:02:30.418466 containerd[1478]: time="2025-05-14T00:02:30.418444794Z" level=warning msg="cleaning up after shim disconnected" id=c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695 namespace=k8s.io May 14 00:02:30.418466 containerd[1478]: time="2025-05-14T00:02:30.418456987Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:02:30.425563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:30.550816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991-rootfs.mount: Deactivated successfully. 
May 14 00:02:31.276910 kubelet[2636]: E0514 00:02:31.276878 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:31.279027 containerd[1478]: time="2025-05-14T00:02:31.278985663Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:02:31.780093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063562665.mount: Deactivated successfully. May 14 00:02:31.810535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763002220.mount: Deactivated successfully. May 14 00:02:31.902635 containerd[1478]: time="2025-05-14T00:02:31.902550593Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\"" May 14 00:02:31.903335 containerd[1478]: time="2025-05-14T00:02:31.903300911Z" level=info msg="StartContainer for \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\"" May 14 00:02:31.936694 systemd[1]: Started cri-containerd-854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491.scope - libcontainer container 854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491. May 14 00:02:31.973251 systemd[1]: cri-containerd-854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491.scope: Deactivated successfully. 
May 14 00:02:31.974205 containerd[1478]: time="2025-05-14T00:02:31.974170076Z" level=info msg="StartContainer for \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\" returns successfully" May 14 00:02:32.101415 containerd[1478]: time="2025-05-14T00:02:32.101261458Z" level=info msg="shim disconnected" id=854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491 namespace=k8s.io May 14 00:02:32.101415 containerd[1478]: time="2025-05-14T00:02:32.101346187Z" level=warning msg="cleaning up after shim disconnected" id=854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491 namespace=k8s.io May 14 00:02:32.101415 containerd[1478]: time="2025-05-14T00:02:32.101360293Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:02:32.114584 containerd[1478]: time="2025-05-14T00:02:32.114501690Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:02:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 00:02:32.282956 kubelet[2636]: E0514 00:02:32.282895 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:32.285730 containerd[1478]: time="2025-05-14T00:02:32.285562849Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:02:32.704486 containerd[1478]: time="2025-05-14T00:02:32.704413628Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\"" May 14 00:02:32.705099 containerd[1478]: time="2025-05-14T00:02:32.705070210Z" 
level=info msg="StartContainer for \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\"" May 14 00:02:32.715549 containerd[1478]: time="2025-05-14T00:02:32.715452790Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:32.717567 containerd[1478]: time="2025-05-14T00:02:32.716811740Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 00:02:32.718604 containerd[1478]: time="2025-05-14T00:02:32.718570760Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:32.720230 containerd[1478]: time="2025-05-14T00:02:32.720184407Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.192091041s" May 14 00:02:32.720230 containerd[1478]: time="2025-05-14T00:02:32.720215395Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 00:02:32.723792 containerd[1478]: time="2025-05-14T00:02:32.723626206Z" level=info msg="CreateContainer within sandbox \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:02:32.735008 
systemd[1]: Started cri-containerd-0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044.scope - libcontainer container 0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044. May 14 00:02:32.744558 containerd[1478]: time="2025-05-14T00:02:32.744494068Z" level=info msg="CreateContainer within sandbox \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\"" May 14 00:02:32.748573 containerd[1478]: time="2025-05-14T00:02:32.745062073Z" level=info msg="StartContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\"" May 14 00:02:32.769411 systemd[1]: cri-containerd-0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044.scope: Deactivated successfully. May 14 00:02:32.776926 systemd[1]: Started cri-containerd-862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893.scope - libcontainer container 862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893. May 14 00:02:32.779531 containerd[1478]: time="2025-05-14T00:02:32.777417761Z" level=info msg="StartContainer for \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\" returns successfully" May 14 00:02:32.781589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491-rootfs.mount: Deactivated successfully. May 14 00:02:32.814789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044-rootfs.mount: Deactivated successfully. 
May 14 00:02:32.922527 containerd[1478]: time="2025-05-14T00:02:32.922451416Z" level=info msg="StartContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" returns successfully" May 14 00:02:32.928254 containerd[1478]: time="2025-05-14T00:02:32.928159816Z" level=info msg="shim disconnected" id=0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044 namespace=k8s.io May 14 00:02:32.928254 containerd[1478]: time="2025-05-14T00:02:32.928249775Z" level=warning msg="cleaning up after shim disconnected" id=0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044 namespace=k8s.io May 14 00:02:32.928357 containerd[1478]: time="2025-05-14T00:02:32.928263120Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:02:33.300034 kubelet[2636]: E0514 00:02:33.299923 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:33.300034 kubelet[2636]: E0514 00:02:33.299925 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:33.301952 containerd[1478]: time="2025-05-14T00:02:33.301908778Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:02:33.531167 kubelet[2636]: I0514 00:02:33.531104 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n2596" podStartSLOduration=2.6762467709999997 podStartE2EDuration="18.531082512s" podCreationTimestamp="2025-05-14 00:02:15 +0000 UTC" firstStartedPulling="2025-05-14 00:02:16.86713053 +0000 UTC m=+6.758849321" lastFinishedPulling="2025-05-14 00:02:32.721966271 +0000 UTC m=+22.613685062" observedRunningTime="2025-05-14 
00:02:33.530689454 +0000 UTC m=+23.422408265" watchObservedRunningTime="2025-05-14 00:02:33.531082512 +0000 UTC m=+23.422801303" May 14 00:02:33.534877 containerd[1478]: time="2025-05-14T00:02:33.534756696Z" level=info msg="CreateContainer within sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\"" May 14 00:02:33.535824 containerd[1478]: time="2025-05-14T00:02:33.535802447Z" level=info msg="StartContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\"" May 14 00:02:33.595701 systemd[1]: Started cri-containerd-f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002.scope - libcontainer container f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002. May 14 00:02:33.695532 containerd[1478]: time="2025-05-14T00:02:33.695459683Z" level=info msg="StartContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" returns successfully" May 14 00:02:33.791436 systemd[1]: run-containerd-runc-k8s.io-f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002-runc.h4LbGS.mount: Deactivated successfully. 
May 14 00:02:33.977397 kubelet[2636]: I0514 00:02:33.977357 2636 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 00:02:34.318153 kubelet[2636]: E0514 00:02:34.318056 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:34.318573 kubelet[2636]: E0514 00:02:34.318301 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:34.331022 systemd[1]: Created slice kubepods-burstable-podcdc08fab_88b1_4758_89a4_ed85f6fb773c.slice - libcontainer container kubepods-burstable-podcdc08fab_88b1_4758_89a4_ed85f6fb773c.slice. May 14 00:02:34.392458 systemd[1]: Created slice kubepods-burstable-pode0c14258_6db3_4ef7_9e82_d959e4f48507.slice - libcontainer container kubepods-burstable-pode0c14258_6db3_4ef7_9e82_d959e4f48507.slice. 
May 14 00:02:34.415782 kubelet[2636]: I0514 00:02:34.415557 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4f64w" podStartSLOduration=7.673521205 podStartE2EDuration="20.415539569s" podCreationTimestamp="2025-05-14 00:02:14 +0000 UTC" firstStartedPulling="2025-05-14 00:02:16.785837416 +0000 UTC m=+6.677556207" lastFinishedPulling="2025-05-14 00:02:29.52785578 +0000 UTC m=+19.419574571" observedRunningTime="2025-05-14 00:02:34.413457933 +0000 UTC m=+24.305176734" watchObservedRunningTime="2025-05-14 00:02:34.415539569 +0000 UTC m=+24.307258360" May 14 00:02:34.479798 kubelet[2636]: I0514 00:02:34.479747 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdc08fab-88b1-4758-89a4-ed85f6fb773c-config-volume\") pod \"coredns-6f6b679f8f-ck2t5\" (UID: \"cdc08fab-88b1-4758-89a4-ed85f6fb773c\") " pod="kube-system/coredns-6f6b679f8f-ck2t5" May 14 00:02:34.479798 kubelet[2636]: I0514 00:02:34.479791 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c14258-6db3-4ef7-9e82-d959e4f48507-config-volume\") pod \"coredns-6f6b679f8f-cdwhx\" (UID: \"e0c14258-6db3-4ef7-9e82-d959e4f48507\") " pod="kube-system/coredns-6f6b679f8f-cdwhx" May 14 00:02:34.480025 kubelet[2636]: I0514 00:02:34.479821 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ksb\" (UniqueName: \"kubernetes.io/projected/e0c14258-6db3-4ef7-9e82-d959e4f48507-kube-api-access-q4ksb\") pod \"coredns-6f6b679f8f-cdwhx\" (UID: \"e0c14258-6db3-4ef7-9e82-d959e4f48507\") " pod="kube-system/coredns-6f6b679f8f-cdwhx" May 14 00:02:34.480025 kubelet[2636]: I0514 00:02:34.479850 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsncw\" 
(UniqueName: \"kubernetes.io/projected/cdc08fab-88b1-4758-89a4-ed85f6fb773c-kube-api-access-zsncw\") pod \"coredns-6f6b679f8f-ck2t5\" (UID: \"cdc08fab-88b1-4758-89a4-ed85f6fb773c\") " pod="kube-system/coredns-6f6b679f8f-ck2t5" May 14 00:02:34.634111 kubelet[2636]: E0514 00:02:34.633980 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:34.644672 containerd[1478]: time="2025-05-14T00:02:34.644597429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ck2t5,Uid:cdc08fab-88b1-4758-89a4-ed85f6fb773c,Namespace:kube-system,Attempt:0,}" May 14 00:02:34.650198 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:56368.service - OpenSSH per-connection server daemon (10.0.0.1:56368). May 14 00:02:34.695783 kubelet[2636]: E0514 00:02:34.695750 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:34.698185 containerd[1478]: time="2025-05-14T00:02:34.697006741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cdwhx,Uid:e0c14258-6db3-4ef7-9e82-d959e4f48507,Namespace:kube-system,Attempt:0,}" May 14 00:02:34.716158 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 56368 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:34.716036 sshd-session[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:34.723126 systemd-logind[1462]: New session 8 of user core. May 14 00:02:34.732826 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 00:02:34.897202 sshd[3475]: Connection closed by 10.0.0.1 port 56368 May 14 00:02:34.897501 sshd-session[3446]: pam_unix(sshd:session): session closed for user core May 14 00:02:34.903473 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:56368.service: Deactivated successfully. May 14 00:02:34.906243 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:02:34.907091 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. May 14 00:02:34.908009 systemd-logind[1462]: Removed session 8. May 14 00:02:35.320490 kubelet[2636]: E0514 00:02:35.320438 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:36.150440 systemd-networkd[1401]: cilium_host: Link UP May 14 00:02:36.150698 systemd-networkd[1401]: cilium_net: Link UP May 14 00:02:36.150919 systemd-networkd[1401]: cilium_net: Gained carrier May 14 00:02:36.151133 systemd-networkd[1401]: cilium_host: Gained carrier May 14 00:02:36.274893 systemd-networkd[1401]: cilium_vxlan: Link UP May 14 00:02:36.274904 systemd-networkd[1401]: cilium_vxlan: Gained carrier May 14 00:02:36.321930 kubelet[2636]: E0514 00:02:36.321881 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:36.509553 kernel: NET: Registered PF_ALG protocol family May 14 00:02:36.781624 systemd-networkd[1401]: cilium_net: Gained IPv6LL May 14 00:02:37.101681 systemd-networkd[1401]: cilium_host: Gained IPv6LL May 14 00:02:37.227883 systemd-networkd[1401]: lxc_health: Link UP May 14 00:02:37.236102 systemd-networkd[1401]: lxc_health: Gained carrier May 14 00:02:37.717565 kernel: eth0: renamed from tmp8bfa6 May 14 00:02:37.722658 systemd-networkd[1401]: lxc222b86b02053: Link UP May 14 00:02:37.725866 systemd-networkd[1401]: lxc222b86b02053: Gained carrier May 14 
00:02:37.757194 systemd-networkd[1401]: lxcadbc49fe98bb: Link UP May 14 00:02:37.768551 kernel: eth0: renamed from tmp5bd17 May 14 00:02:37.776295 systemd-networkd[1401]: lxcadbc49fe98bb: Gained carrier May 14 00:02:37.936804 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL May 14 00:02:38.055805 kubelet[2636]: E0514 00:02:38.055461 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:38.509733 systemd-networkd[1401]: lxc_health: Gained IPv6LL May 14 00:02:39.021773 systemd-networkd[1401]: lxc222b86b02053: Gained IPv6LL May 14 00:02:39.597874 systemd-networkd[1401]: lxcadbc49fe98bb: Gained IPv6LL May 14 00:02:39.806038 kubelet[2636]: I0514 00:02:39.805943 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:02:39.806581 kubelet[2636]: E0514 00:02:39.806492 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:39.911350 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:35846.service - OpenSSH per-connection server daemon (10.0.0.1:35846). May 14 00:02:39.959683 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 35846 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:39.961498 sshd-session[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:39.965974 systemd-logind[1462]: New session 9 of user core. May 14 00:02:39.976701 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:02:40.106960 sshd[3864]: Connection closed by 10.0.0.1 port 35846 May 14 00:02:40.107452 sshd-session[3862]: pam_unix(sshd:session): session closed for user core May 14 00:02:40.111853 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:35846.service: Deactivated successfully. 
May 14 00:02:40.114304 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:02:40.115316 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. May 14 00:02:40.116480 systemd-logind[1462]: Removed session 9. May 14 00:02:40.328555 kubelet[2636]: E0514 00:02:40.328485 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:41.716448 containerd[1478]: time="2025-05-14T00:02:41.716278107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:41.716448 containerd[1478]: time="2025-05-14T00:02:41.716348830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:41.716448 containerd[1478]: time="2025-05-14T00:02:41.716368076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:41.717129 containerd[1478]: time="2025-05-14T00:02:41.716463245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:41.718905 containerd[1478]: time="2025-05-14T00:02:41.718678961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:02:41.718905 containerd[1478]: time="2025-05-14T00:02:41.718723776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:02:41.718905 containerd[1478]: time="2025-05-14T00:02:41.718737131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:41.720413 containerd[1478]: time="2025-05-14T00:02:41.718804577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:02:41.758957 systemd[1]: Started cri-containerd-5bd17048bd1cac9bc3a2c94dadda7e477d9a014d4633e24e37be293177d02dec.scope - libcontainer container 5bd17048bd1cac9bc3a2c94dadda7e477d9a014d4633e24e37be293177d02dec. May 14 00:02:41.761816 systemd[1]: Started cri-containerd-8bfa68f2d43d7fa009f8d227b4e5703e6a18bf9ff62ca7672f6ca2d2528d1f46.scope - libcontainer container 8bfa68f2d43d7fa009f8d227b4e5703e6a18bf9ff62ca7672f6ca2d2528d1f46. May 14 00:02:41.780095 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:41.789842 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:02:41.808794 containerd[1478]: time="2025-05-14T00:02:41.808732500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ck2t5,Uid:cdc08fab-88b1-4758-89a4-ed85f6fb773c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bfa68f2d43d7fa009f8d227b4e5703e6a18bf9ff62ca7672f6ca2d2528d1f46\"" May 14 00:02:41.811243 kubelet[2636]: E0514 00:02:41.810126 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:41.817322 containerd[1478]: time="2025-05-14T00:02:41.817189158Z" level=info msg="CreateContainer within sandbox \"8bfa68f2d43d7fa009f8d227b4e5703e6a18bf9ff62ca7672f6ca2d2528d1f46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:02:41.820586 containerd[1478]: time="2025-05-14T00:02:41.820558741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-cdwhx,Uid:e0c14258-6db3-4ef7-9e82-d959e4f48507,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd17048bd1cac9bc3a2c94dadda7e477d9a014d4633e24e37be293177d02dec\"" May 14 00:02:41.821708 kubelet[2636]: E0514 00:02:41.821658 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:41.823848 containerd[1478]: time="2025-05-14T00:02:41.823813777Z" level=info msg="CreateContainer within sandbox \"5bd17048bd1cac9bc3a2c94dadda7e477d9a014d4633e24e37be293177d02dec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:02:41.844658 containerd[1478]: time="2025-05-14T00:02:41.844594649Z" level=info msg="CreateContainer within sandbox \"8bfa68f2d43d7fa009f8d227b4e5703e6a18bf9ff62ca7672f6ca2d2528d1f46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8db180572310dcf765a642a2e82242190e2bac4d201f3f1988e09a3046a1e05d\"" May 14 00:02:41.845548 containerd[1478]: time="2025-05-14T00:02:41.845468108Z" level=info msg="StartContainer for \"8db180572310dcf765a642a2e82242190e2bac4d201f3f1988e09a3046a1e05d\"" May 14 00:02:41.846820 containerd[1478]: time="2025-05-14T00:02:41.846778627Z" level=info msg="CreateContainer within sandbox \"5bd17048bd1cac9bc3a2c94dadda7e477d9a014d4633e24e37be293177d02dec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c79bf16996330552e14a44d0e6725227b65c44fae10ab168e56b5901541d3586\"" May 14 00:02:41.847358 containerd[1478]: time="2025-05-14T00:02:41.847325013Z" level=info msg="StartContainer for \"c79bf16996330552e14a44d0e6725227b65c44fae10ab168e56b5901541d3586\"" May 14 00:02:41.876723 systemd[1]: Started cri-containerd-8db180572310dcf765a642a2e82242190e2bac4d201f3f1988e09a3046a1e05d.scope - libcontainer container 8db180572310dcf765a642a2e82242190e2bac4d201f3f1988e09a3046a1e05d. 
May 14 00:02:41.880747 systemd[1]: Started cri-containerd-c79bf16996330552e14a44d0e6725227b65c44fae10ab168e56b5901541d3586.scope - libcontainer container c79bf16996330552e14a44d0e6725227b65c44fae10ab168e56b5901541d3586. May 14 00:02:41.920436 containerd[1478]: time="2025-05-14T00:02:41.920388863Z" level=info msg="StartContainer for \"8db180572310dcf765a642a2e82242190e2bac4d201f3f1988e09a3046a1e05d\" returns successfully" May 14 00:02:41.920582 containerd[1478]: time="2025-05-14T00:02:41.920438165Z" level=info msg="StartContainer for \"c79bf16996330552e14a44d0e6725227b65c44fae10ab168e56b5901541d3586\" returns successfully" May 14 00:02:42.334773 kubelet[2636]: E0514 00:02:42.334730 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:42.337033 kubelet[2636]: E0514 00:02:42.336974 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:42.455722 kubelet[2636]: I0514 00:02:42.455636 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cdwhx" podStartSLOduration=27.455615475 podStartE2EDuration="27.455615475s" podCreationTimestamp="2025-05-14 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:42.454934754 +0000 UTC m=+32.346653565" watchObservedRunningTime="2025-05-14 00:02:42.455615475 +0000 UTC m=+32.347334276" May 14 00:02:42.590019 kubelet[2636]: I0514 00:02:42.589672 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ck2t5" podStartSLOduration=27.58965308 podStartE2EDuration="27.58965308s" podCreationTimestamp="2025-05-14 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:02:42.548234228 +0000 UTC m=+32.439953019" watchObservedRunningTime="2025-05-14 00:02:42.58965308 +0000 UTC m=+32.481371871" May 14 00:02:43.339034 kubelet[2636]: E0514 00:02:43.338985 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:43.339496 kubelet[2636]: E0514 00:02:43.339123 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:44.340757 kubelet[2636]: E0514 00:02:44.340720 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:44.341221 kubelet[2636]: E0514 00:02:44.340724 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:02:45.122647 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856). May 14 00:02:45.169627 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:45.171540 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:45.175854 systemd-logind[1462]: New session 10 of user core. May 14 00:02:45.185658 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 14 00:02:45.306264 sshd[4053]: Connection closed by 10.0.0.1 port 35856 May 14 00:02:45.306660 sshd-session[4051]: pam_unix(sshd:session): session closed for user core May 14 00:02:45.311028 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:35856.service: Deactivated successfully. May 14 00:02:45.313131 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:02:45.313910 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. May 14 00:02:45.314837 systemd-logind[1462]: Removed session 10. May 14 00:02:50.322650 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:55036.service - OpenSSH per-connection server daemon (10.0.0.1:55036). May 14 00:02:50.365430 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 55036 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:50.367381 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:50.372606 systemd-logind[1462]: New session 11 of user core. May 14 00:02:50.379739 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:02:50.500364 sshd[4074]: Connection closed by 10.0.0.1 port 55036 May 14 00:02:50.500908 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 14 00:02:50.509326 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:55036.service: Deactivated successfully. May 14 00:02:50.511297 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:02:50.512824 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. May 14 00:02:50.519089 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:55048.service - OpenSSH per-connection server daemon (10.0.0.1:55048). May 14 00:02:50.520440 systemd-logind[1462]: Removed session 11. 
May 14 00:02:50.559707 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 55048 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:50.561740 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:50.567163 systemd-logind[1462]: New session 12 of user core. May 14 00:02:50.576783 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:02:50.736221 sshd[4091]: Connection closed by 10.0.0.1 port 55048 May 14 00:02:50.737636 sshd-session[4088]: pam_unix(sshd:session): session closed for user core May 14 00:02:50.753261 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:55048.service: Deactivated successfully. May 14 00:02:50.756687 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:02:50.760891 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. May 14 00:02:50.775321 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:55060.service - OpenSSH per-connection server daemon (10.0.0.1:55060). May 14 00:02:50.779661 systemd-logind[1462]: Removed session 12. May 14 00:02:50.817052 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 55060 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:50.819044 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:50.823762 systemd-logind[1462]: New session 13 of user core. May 14 00:02:50.835830 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:02:50.959920 sshd[4104]: Connection closed by 10.0.0.1 port 55060 May 14 00:02:50.960392 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 14 00:02:50.965283 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:55060.service: Deactivated successfully. May 14 00:02:50.968361 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:02:50.969249 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. 
May 14 00:02:50.970733 systemd-logind[1462]: Removed session 13. May 14 00:02:55.973037 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:55072.service - OpenSSH per-connection server daemon (10.0.0.1:55072). May 14 00:02:56.016066 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 55072 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:02:56.017844 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:56.023026 systemd-logind[1462]: New session 14 of user core. May 14 00:02:56.030809 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:02:56.151820 sshd[4120]: Connection closed by 10.0.0.1 port 55072 May 14 00:02:56.152248 sshd-session[4118]: pam_unix(sshd:session): session closed for user core May 14 00:02:56.157573 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:55072.service: Deactivated successfully. May 14 00:02:56.159765 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:02:56.160667 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. May 14 00:02:56.161836 systemd-logind[1462]: Removed session 14. May 14 00:03:01.165252 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). May 14 00:03:01.207594 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:01.209462 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:01.213947 systemd-logind[1462]: New session 15 of user core. May 14 00:03:01.223729 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 00:03:01.333625 sshd[4136]: Connection closed by 10.0.0.1 port 58656 May 14 00:03:01.334006 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 14 00:03:01.348113 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:58656.service: Deactivated successfully. May 14 00:03:01.349863 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:03:01.351333 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. May 14 00:03:01.365755 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:58664.service - OpenSSH per-connection server daemon (10.0.0.1:58664). May 14 00:03:01.366582 systemd-logind[1462]: Removed session 15. May 14 00:03:01.405094 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 58664 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:01.406808 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:01.411206 systemd-logind[1462]: New session 16 of user core. May 14 00:03:01.418672 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:03:01.709562 sshd[4151]: Connection closed by 10.0.0.1 port 58664 May 14 00:03:01.710227 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 14 00:03:01.724767 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:58664.service: Deactivated successfully. May 14 00:03:01.726634 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:03:01.728116 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. May 14 00:03:01.734811 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670). May 14 00:03:01.735961 systemd-logind[1462]: Removed session 16. 
May 14 00:03:01.782146 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:01.784064 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:01.790161 systemd-logind[1462]: New session 17 of user core. May 14 00:03:01.799813 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:03:03.271060 sshd[4164]: Connection closed by 10.0.0.1 port 58670 May 14 00:03:03.273641 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 14 00:03:03.282606 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:58670.service: Deactivated successfully. May 14 00:03:03.284921 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:03:03.287187 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. May 14 00:03:03.296358 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:58674.service - OpenSSH per-connection server daemon (10.0.0.1:58674). May 14 00:03:03.300309 systemd-logind[1462]: Removed session 17. May 14 00:03:03.340236 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 58674 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:03.342052 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:03.347245 systemd-logind[1462]: New session 18 of user core. May 14 00:03:03.356657 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:03:03.742419 sshd[4188]: Connection closed by 10.0.0.1 port 58674 May 14 00:03:03.742935 sshd-session[4185]: pam_unix(sshd:session): session closed for user core May 14 00:03:03.751427 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:58674.service: Deactivated successfully. May 14 00:03:03.753383 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:03:03.755090 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. 
May 14 00:03:03.760779 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:58682.service - OpenSSH per-connection server daemon (10.0.0.1:58682). May 14 00:03:03.761853 systemd-logind[1462]: Removed session 18. May 14 00:03:03.804886 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 58682 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:03.806709 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:03.812056 systemd-logind[1462]: New session 19 of user core. May 14 00:03:03.825815 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 00:03:03.941990 sshd[4202]: Connection closed by 10.0.0.1 port 58682 May 14 00:03:03.942393 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 14 00:03:03.947069 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:58682.service: Deactivated successfully. May 14 00:03:03.948953 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:03:03.949750 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. May 14 00:03:03.950676 systemd-logind[1462]: Removed session 19. May 14 00:03:08.991699 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). May 14 00:03:09.052496 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:09.055690 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:09.073152 systemd-logind[1462]: New session 20 of user core. May 14 00:03:09.087184 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 14 00:03:09.310163 sshd[4217]: Connection closed by 10.0.0.1 port 35750 May 14 00:03:09.311820 sshd-session[4215]: pam_unix(sshd:session): session closed for user core May 14 00:03:09.316936 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:35750.service: Deactivated successfully. May 14 00:03:09.322456 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:03:09.328846 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. May 14 00:03:09.332814 systemd-logind[1462]: Removed session 20. May 14 00:03:14.322468 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760). May 14 00:03:14.365140 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:14.366713 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:14.370535 systemd-logind[1462]: New session 21 of user core. May 14 00:03:14.380710 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:03:14.487767 sshd[4237]: Connection closed by 10.0.0.1 port 35760 May 14 00:03:14.488135 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 14 00:03:14.491918 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:35760.service: Deactivated successfully. May 14 00:03:14.493876 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:03:14.494629 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. May 14 00:03:14.495576 systemd-logind[1462]: Removed session 21. May 14 00:03:19.505312 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:55906.service - OpenSSH per-connection server daemon (10.0.0.1:55906). 
May 14 00:03:19.546964 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 55906 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:19.548801 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:19.553678 systemd-logind[1462]: New session 22 of user core. May 14 00:03:19.566655 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 00:03:19.681680 sshd[4254]: Connection closed by 10.0.0.1 port 55906 May 14 00:03:19.682048 sshd-session[4252]: pam_unix(sshd:session): session closed for user core May 14 00:03:19.685762 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:55906.service: Deactivated successfully. May 14 00:03:19.687851 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:03:19.688571 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. May 14 00:03:19.689496 systemd-logind[1462]: Removed session 22. May 14 00:03:24.694814 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:55908.service - OpenSSH per-connection server daemon (10.0.0.1:55908). May 14 00:03:24.736341 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 55908 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:24.737812 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:24.741885 systemd-logind[1462]: New session 23 of user core. May 14 00:03:24.752633 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:03:24.855272 sshd[4269]: Connection closed by 10.0.0.1 port 55908 May 14 00:03:24.855645 sshd-session[4267]: pam_unix(sshd:session): session closed for user core May 14 00:03:24.867110 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:55908.service: Deactivated successfully. May 14 00:03:24.868933 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:03:24.870284 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. 
May 14 00:03:24.877754 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:55922.service - OpenSSH per-connection server daemon (10.0.0.1:55922). May 14 00:03:24.878992 systemd-logind[1462]: Removed session 23. May 14 00:03:24.915243 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 55922 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:24.916585 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:24.920594 systemd-logind[1462]: New session 24 of user core. May 14 00:03:24.933645 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 00:03:26.260277 containerd[1478]: time="2025-05-14T00:03:26.260107742Z" level=info msg="StopContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" with timeout 30 (s)" May 14 00:03:26.268575 containerd[1478]: time="2025-05-14T00:03:26.267544636Z" level=info msg="Stop container \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" with signal terminated" May 14 00:03:26.282009 systemd[1]: cri-containerd-862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893.scope: Deactivated successfully. 
May 14 00:03:26.291754 containerd[1478]: time="2025-05-14T00:03:26.291695664Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:03:26.301827 containerd[1478]: time="2025-05-14T00:03:26.301779030Z" level=info msg="StopContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" with timeout 2 (s)" May 14 00:03:26.302332 containerd[1478]: time="2025-05-14T00:03:26.302284398Z" level=info msg="Stop container \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" with signal terminated" May 14 00:03:26.309019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893-rootfs.mount: Deactivated successfully. May 14 00:03:26.313841 systemd-networkd[1401]: lxc_health: Link DOWN May 14 00:03:26.313851 systemd-networkd[1401]: lxc_health: Lost carrier May 14 00:03:26.317723 containerd[1478]: time="2025-05-14T00:03:26.315856591Z" level=info msg="shim disconnected" id=862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893 namespace=k8s.io May 14 00:03:26.317723 containerd[1478]: time="2025-05-14T00:03:26.315910322Z" level=warning msg="cleaning up after shim disconnected" id=862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893 namespace=k8s.io May 14 00:03:26.317723 containerd[1478]: time="2025-05-14T00:03:26.315920051Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:26.331307 systemd[1]: cri-containerd-f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002.scope: Deactivated successfully. May 14 00:03:26.331745 systemd[1]: cri-containerd-f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002.scope: Consumed 7.606s CPU time, 125.4M memory peak, 224K read from disk, 13.3M written to disk. 
May 14 00:03:26.352822 containerd[1478]: time="2025-05-14T00:03:26.351294274Z" level=info msg="StopContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" returns successfully" May 14 00:03:26.363261 containerd[1478]: time="2025-05-14T00:03:26.361589391Z" level=info msg="StopPodSandbox for \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\"" May 14 00:03:26.367084 containerd[1478]: time="2025-05-14T00:03:26.361663391Z" level=info msg="Container to stop \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.370732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002-rootfs.mount: Deactivated successfully. May 14 00:03:26.370955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c-shm.mount: Deactivated successfully. May 14 00:03:26.375618 containerd[1478]: time="2025-05-14T00:03:26.375470390Z" level=info msg="shim disconnected" id=f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002 namespace=k8s.io May 14 00:03:26.375618 containerd[1478]: time="2025-05-14T00:03:26.375609793Z" level=warning msg="cleaning up after shim disconnected" id=f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002 namespace=k8s.io May 14 00:03:26.375618 containerd[1478]: time="2025-05-14T00:03:26.375620935Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:26.378144 systemd[1]: cri-containerd-79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c.scope: Deactivated successfully. 
May 14 00:03:26.390922 containerd[1478]: time="2025-05-14T00:03:26.390846419Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:03:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 00:03:26.395285 containerd[1478]: time="2025-05-14T00:03:26.395241593Z" level=info msg="StopContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" returns successfully" May 14 00:03:26.395858 containerd[1478]: time="2025-05-14T00:03:26.395831520Z" level=info msg="StopPodSandbox for \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\"" May 14 00:03:26.395931 containerd[1478]: time="2025-05-14T00:03:26.395859063Z" level=info msg="Container to stop \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.395931 containerd[1478]: time="2025-05-14T00:03:26.395889460Z" level=info msg="Container to stop \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.395931 containerd[1478]: time="2025-05-14T00:03:26.395897575Z" level=info msg="Container to stop \"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.395931 containerd[1478]: time="2025-05-14T00:03:26.395905259Z" level=info msg="Container to stop \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.395931 containerd[1478]: time="2025-05-14T00:03:26.395913255Z" level=info msg="Container to stop \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:03:26.400276 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026-shm.mount: Deactivated successfully. May 14 00:03:26.402140 systemd[1]: cri-containerd-e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026.scope: Deactivated successfully. May 14 00:03:26.417340 containerd[1478]: time="2025-05-14T00:03:26.417266635Z" level=info msg="shim disconnected" id=79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c namespace=k8s.io May 14 00:03:26.417340 containerd[1478]: time="2025-05-14T00:03:26.417330045Z" level=warning msg="cleaning up after shim disconnected" id=79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c namespace=k8s.io May 14 00:03:26.417340 containerd[1478]: time="2025-05-14T00:03:26.417340585Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:26.427457 containerd[1478]: time="2025-05-14T00:03:26.426586255Z" level=info msg="shim disconnected" id=e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026 namespace=k8s.io May 14 00:03:26.427457 containerd[1478]: time="2025-05-14T00:03:26.426642732Z" level=warning msg="cleaning up after shim disconnected" id=e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026 namespace=k8s.io May 14 00:03:26.427457 containerd[1478]: time="2025-05-14T00:03:26.426651148Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:26.439629 containerd[1478]: time="2025-05-14T00:03:26.439576596Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:03:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 00:03:26.441379 containerd[1478]: time="2025-05-14T00:03:26.441324647Z" level=info msg="TearDown network for sandbox \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\" successfully" May 14 00:03:26.441379 containerd[1478]: 
time="2025-05-14T00:03:26.441359142Z" level=info msg="StopPodSandbox for \"79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c\" returns successfully" May 14 00:03:26.446210 containerd[1478]: time="2025-05-14T00:03:26.446176856Z" level=info msg="TearDown network for sandbox \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" successfully" May 14 00:03:26.446210 containerd[1478]: time="2025-05-14T00:03:26.446204198Z" level=info msg="StopPodSandbox for \"e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026\" returns successfully" May 14 00:03:26.451338 kubelet[2636]: I0514 00:03:26.451312 2636 scope.go:117] "RemoveContainer" containerID="862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893" May 14 00:03:26.452678 containerd[1478]: time="2025-05-14T00:03:26.452615901Z" level=info msg="RemoveContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\"" May 14 00:03:26.456585 containerd[1478]: time="2025-05-14T00:03:26.456548428Z" level=info msg="RemoveContainer for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" returns successfully" May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456745 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cni-path\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456777 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-net\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456800 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhzp8\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-kube-api-access-jhzp8\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456817 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-xtables-lock\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456831 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-cgroup\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457380 kubelet[2636]: I0514 00:03:26.456846 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-etc-cni-netd\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457598 kubelet[2636]: I0514 00:03:26.456846 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cni-path" (OuterVolumeSpecName: "cni-path") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.457598 kubelet[2636]: I0514 00:03:26.456863 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-config-path\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457598 kubelet[2636]: I0514 00:03:26.456879 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-lib-modules\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457598 kubelet[2636]: I0514 00:03:26.456892 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.457598 kubelet[2636]: I0514 00:03:26.456895 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-bpf-maps\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457754 kubelet[2636]: I0514 00:03:26.456928 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.457754 kubelet[2636]: I0514 00:03:26.456933 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-kernel\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457754 kubelet[2636]: I0514 00:03:26.456952 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.457754 kubelet[2636]: I0514 00:03:26.456954 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-clustermesh-secrets\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457754 kubelet[2636]: I0514 00:03:26.456980 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk9pb\" (UniqueName: \"kubernetes.io/projected/602c567d-66cd-4137-8238-a3ae609b794e-kube-api-access-vk9pb\") pod \"602c567d-66cd-4137-8238-a3ae609b794e\" (UID: \"602c567d-66cd-4137-8238-a3ae609b794e\") " May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.456994 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-run\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457009 2636 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hubble-tls\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457024 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hostproc\") pod \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\" (UID: \"16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2\") " May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457040 2636 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602c567d-66cd-4137-8238-a3ae609b794e-cilium-config-path\") pod \"602c567d-66cd-4137-8238-a3ae609b794e\" (UID: \"602c567d-66cd-4137-8238-a3ae609b794e\") " May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457067 2636 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457076 2636 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.457931 kubelet[2636]: I0514 00:03:26.457085 2636 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.458127 kubelet[2636]: I0514 00:03:26.457092 2636 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-bpf-maps\") on node 
\"localhost\" DevicePath \"\"" May 14 00:03:26.458238 kubelet[2636]: I0514 00:03:26.458211 2636 scope.go:117] "RemoveContainer" containerID="862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893" May 14 00:03:26.458675 containerd[1478]: time="2025-05-14T00:03:26.458633958Z" level=error msg="ContainerStatus for \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\": not found" May 14 00:03:26.461248 kubelet[2636]: I0514 00:03:26.461099 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462567 kubelet[2636]: I0514 00:03:26.462538 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462851 kubelet[2636]: I0514 00:03:26.462637 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462851 kubelet[2636]: I0514 00:03:26.462708 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462851 kubelet[2636]: I0514 00:03:26.462727 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462851 kubelet[2636]: I0514 00:03:26.462745 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hostproc" (OuterVolumeSpecName: "hostproc") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:03:26.462851 kubelet[2636]: I0514 00:03:26.462816 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602c567d-66cd-4137-8238-a3ae609b794e-kube-api-access-vk9pb" (OuterVolumeSpecName: "kube-api-access-vk9pb") pod "602c567d-66cd-4137-8238-a3ae609b794e" (UID: "602c567d-66cd-4137-8238-a3ae609b794e"). InnerVolumeSpecName "kube-api-access-vk9pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:03:26.465914 kubelet[2636]: I0514 00:03:26.465892 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602c567d-66cd-4137-8238-a3ae609b794e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "602c567d-66cd-4137-8238-a3ae609b794e" (UID: "602c567d-66cd-4137-8238-a3ae609b794e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:03:26.466310 kubelet[2636]: I0514 00:03:26.466282 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:03:26.467374 kubelet[2636]: E0514 00:03:26.467347 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\": not found" containerID="862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893" May 14 00:03:26.467527 kubelet[2636]: I0514 00:03:26.467492 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:03:26.467831 kubelet[2636]: I0514 00:03:26.467458 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893"} err="failed to get container status \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\": rpc error: code = NotFound desc = an error occurred when try to find container \"862503dcc7826b762fa62eef2268025f5ca0a16ef4b39a2d4f67e14834642893\": not found" May 14 00:03:26.468543 kubelet[2636]: I0514 00:03:26.468494 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:03:26.469106 kubelet[2636]: I0514 00:03:26.469065 2636 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-kube-api-access-jhzp8" (OuterVolumeSpecName: "kube-api-access-jhzp8") pod "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" (UID: "16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2"). InnerVolumeSpecName "kube-api-access-jhzp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558257 2636 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558285 2636 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558296 2636 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558305 2636 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558313 2636 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558322 2636 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vk9pb\" (UniqueName: \"kubernetes.io/projected/602c567d-66cd-4137-8238-a3ae609b794e-kube-api-access-vk9pb\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 kubelet[2636]: I0514 00:03:26.558330 2636 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558401 
kubelet[2636]: I0514 00:03:26.558339 2636 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558883 kubelet[2636]: I0514 00:03:26.558349 2636 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558883 kubelet[2636]: I0514 00:03:26.558356 2636 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602c567d-66cd-4137-8238-a3ae609b794e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558883 kubelet[2636]: I0514 00:03:26.558365 2636 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.558883 kubelet[2636]: I0514 00:03:26.558373 2636 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jhzp8\" (UniqueName: \"kubernetes.io/projected/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2-kube-api-access-jhzp8\") on node \"localhost\" DevicePath \"\"" May 14 00:03:26.758042 systemd[1]: Removed slice kubepods-besteffort-pod602c567d_66cd_4137_8238_a3ae609b794e.slice - libcontainer container kubepods-besteffort-pod602c567d_66cd_4137_8238_a3ae609b794e.slice. May 14 00:03:27.211536 kubelet[2636]: E0514 00:03:27.211453 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:27.275571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79b1dcc23b664090152965e31c720137c317801c35af116419be4012293f2a9c-rootfs.mount: Deactivated successfully. 
May 14 00:03:27.275707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2486926e16c86684723da46fac7b7a755eb383283b643e92f91ade94f63e026-rootfs.mount: Deactivated successfully. May 14 00:03:27.275795 systemd[1]: var-lib-kubelet-pods-602c567d\x2d66cd\x2d4137\x2d8238\x2da3ae609b794e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvk9pb.mount: Deactivated successfully. May 14 00:03:27.275880 systemd[1]: var-lib-kubelet-pods-16ae0a93\x2d1f5c\x2d4dfd\x2db70c\x2d84df7a6e2ce2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhzp8.mount: Deactivated successfully. May 14 00:03:27.275989 systemd[1]: var-lib-kubelet-pods-16ae0a93\x2d1f5c\x2d4dfd\x2db70c\x2d84df7a6e2ce2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:03:27.276067 systemd[1]: var-lib-kubelet-pods-16ae0a93\x2d1f5c\x2d4dfd\x2db70c\x2d84df7a6e2ce2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:03:27.456201 kubelet[2636]: I0514 00:03:27.456162 2636 scope.go:117] "RemoveContainer" containerID="f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002" May 14 00:03:27.457318 containerd[1478]: time="2025-05-14T00:03:27.457283590Z" level=info msg="RemoveContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\"" May 14 00:03:27.461044 containerd[1478]: time="2025-05-14T00:03:27.461011029Z" level=info msg="RemoveContainer for \"f038ce4612cff0ccd2de4252ad7fcdee7187f20a40396b7bd9b5f944ee3e1002\" returns successfully" May 14 00:03:27.462033 kubelet[2636]: I0514 00:03:27.461180 2636 scope.go:117] "RemoveContainer" containerID="0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044" May 14 00:03:27.462131 containerd[1478]: time="2025-05-14T00:03:27.462094540Z" level=info msg="RemoveContainer for \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\"" May 14 00:03:27.463609 systemd[1]: Removed slice 
kubepods-burstable-pod16ae0a93_1f5c_4dfd_b70c_84df7a6e2ce2.slice - libcontainer container kubepods-burstable-pod16ae0a93_1f5c_4dfd_b70c_84df7a6e2ce2.slice. May 14 00:03:27.463740 systemd[1]: kubepods-burstable-pod16ae0a93_1f5c_4dfd_b70c_84df7a6e2ce2.slice: Consumed 7.718s CPU time, 125.7M memory peak, 244K read from disk, 13.3M written to disk. May 14 00:03:27.466546 containerd[1478]: time="2025-05-14T00:03:27.466430610Z" level=info msg="RemoveContainer for \"0c75a80ea4db84c718e0eb21e9fc6771af7c47ad42b70d54637abe6069ea7044\" returns successfully" May 14 00:03:27.466737 kubelet[2636]: I0514 00:03:27.466709 2636 scope.go:117] "RemoveContainer" containerID="854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491" May 14 00:03:27.467728 containerd[1478]: time="2025-05-14T00:03:27.467699693Z" level=info msg="RemoveContainer for \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\"" May 14 00:03:27.471832 containerd[1478]: time="2025-05-14T00:03:27.471785610Z" level=info msg="RemoveContainer for \"854a56e6e5aa483e319517adba4fc8a16425621df99caf4a4d7c7fcaa02fc491\" returns successfully" May 14 00:03:27.471961 kubelet[2636]: I0514 00:03:27.471938 2636 scope.go:117] "RemoveContainer" containerID="c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695" May 14 00:03:27.473302 containerd[1478]: time="2025-05-14T00:03:27.473163188Z" level=info msg="RemoveContainer for \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\"" May 14 00:03:27.477351 containerd[1478]: time="2025-05-14T00:03:27.477314608Z" level=info msg="RemoveContainer for \"c943f06a1f84b67ebadf7db4352924852da5a18732f2e78e0bcac7024342d695\" returns successfully" May 14 00:03:27.477522 kubelet[2636]: I0514 00:03:27.477491 2636 scope.go:117] "RemoveContainer" containerID="c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991" May 14 00:03:27.478519 containerd[1478]: time="2025-05-14T00:03:27.478471309Z" level=info msg="RemoveContainer for 
\"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\"" May 14 00:03:27.481687 containerd[1478]: time="2025-05-14T00:03:27.481657041Z" level=info msg="RemoveContainer for \"c06dde47619b2c4c32cd6632fa4f06552bf4fa69f0816501a3cea38e369ca991\" returns successfully" May 14 00:03:28.213615 kubelet[2636]: I0514 00:03:28.213559 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" path="/var/lib/kubelet/pods/16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2/volumes" May 14 00:03:28.214425 kubelet[2636]: I0514 00:03:28.214394 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602c567d-66cd-4137-8238-a3ae609b794e" path="/var/lib/kubelet/pods/602c567d-66cd-4137-8238-a3ae609b794e/volumes" May 14 00:03:28.228639 sshd[4284]: Connection closed by 10.0.0.1 port 55922 May 14 00:03:28.229239 sshd-session[4281]: pam_unix(sshd:session): session closed for user core May 14 00:03:28.244289 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:55922.service: Deactivated successfully. May 14 00:03:28.246898 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:03:28.248798 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. May 14 00:03:28.257864 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:43584.service - OpenSSH per-connection server daemon (10.0.0.1:43584). May 14 00:03:28.259088 systemd-logind[1462]: Removed session 24. May 14 00:03:28.301531 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 43584 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:28.303453 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:28.308128 systemd-logind[1462]: New session 25 of user core. May 14 00:03:28.317650 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 14 00:03:28.843185 sshd[4447]: Connection closed by 10.0.0.1 port 43584 May 14 00:03:28.844100 sshd-session[4444]: pam_unix(sshd:session): session closed for user core May 14 00:03:28.855726 kubelet[2636]: E0514 00:03:28.855690 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="mount-cgroup" May 14 00:03:28.855726 kubelet[2636]: E0514 00:03:28.855717 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="clean-cilium-state" May 14 00:03:28.855726 kubelet[2636]: E0514 00:03:28.855724 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="602c567d-66cd-4137-8238-a3ae609b794e" containerName="cilium-operator" May 14 00:03:28.855726 kubelet[2636]: E0514 00:03:28.855730 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="mount-bpf-fs" May 14 00:03:28.855726 kubelet[2636]: E0514 00:03:28.855736 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="cilium-agent" May 14 00:03:28.856144 kubelet[2636]: E0514 00:03:28.855743 2636 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="apply-sysctl-overwrites" May 14 00:03:28.856144 kubelet[2636]: I0514 00:03:28.855776 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="602c567d-66cd-4137-8238-a3ae609b794e" containerName="cilium-operator" May 14 00:03:28.856144 kubelet[2636]: I0514 00:03:28.855782 2636 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ae0a93-1f5c-4dfd-b70c-84df7a6e2ce2" containerName="cilium-agent" May 14 00:03:28.859003 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:43584.service: Deactivated successfully. May 14 00:03:28.861888 systemd[1]: session-25.scope: Deactivated successfully. 
May 14 00:03:28.864208 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. May 14 00:03:28.878066 kubelet[2636]: I0514 00:03:28.877047 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-hostproc\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878066 kubelet[2636]: I0514 00:03:28.877578 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-lib-modules\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878066 kubelet[2636]: I0514 00:03:28.877603 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-hubble-tls\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878066 kubelet[2636]: I0514 00:03:28.877622 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-cilium-run\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878066 kubelet[2636]: I0514 00:03:28.877637 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-host-proc-sys-net\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878066 kubelet[2636]: I0514 
00:03:28.877660 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-cni-path\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878290 kubelet[2636]: I0514 00:03:28.877677 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-xtables-lock\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878290 kubelet[2636]: I0514 00:03:28.877690 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-clustermesh-secrets\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878290 kubelet[2636]: I0514 00:03:28.877707 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-host-proc-sys-kernel\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878290 kubelet[2636]: I0514 00:03:28.877721 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78vt\" (UniqueName: \"kubernetes.io/projected/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-kube-api-access-r78vt\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878290 kubelet[2636]: I0514 00:03:28.877767 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-cilium-cgroup\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878408 kubelet[2636]: I0514 00:03:28.877804 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-etc-cni-netd\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878408 kubelet[2636]: I0514 00:03:28.877831 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-cilium-config-path\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878408 kubelet[2636]: I0514 00:03:28.877851 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-cilium-ipsec-secrets\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878408 kubelet[2636]: I0514 00:03:28.877884 2636 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea-bpf-maps\") pod \"cilium-flqnh\" (UID: \"8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea\") " pod="kube-system/cilium-flqnh" May 14 00:03:28.878878 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:43594.service - OpenSSH per-connection server daemon (10.0.0.1:43594). May 14 00:03:28.881727 systemd-logind[1462]: Removed session 25. 
May 14 00:03:28.885842 systemd[1]: Created slice kubepods-burstable-pod8bd7a5a8_d6b2_413f_a2ef_64c29bf01bea.slice - libcontainer container kubepods-burstable-pod8bd7a5a8_d6b2_413f_a2ef_64c29bf01bea.slice. May 14 00:03:28.918416 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 43594 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:28.919836 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:28.924087 systemd-logind[1462]: New session 26 of user core. May 14 00:03:28.935668 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:03:28.988255 sshd[4462]: Connection closed by 10.0.0.1 port 43594 May 14 00:03:28.989736 sshd-session[4459]: pam_unix(sshd:session): session closed for user core May 14 00:03:28.995787 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:43594.service: Deactivated successfully. May 14 00:03:28.997687 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:03:29.010163 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. May 14 00:03:29.011715 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:43596.service - OpenSSH per-connection server daemon (10.0.0.1:43596). May 14 00:03:29.012533 systemd-logind[1462]: Removed session 26. May 14 00:03:29.053501 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 43596 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:03:29.054967 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:29.059058 systemd-logind[1462]: New session 27 of user core. May 14 00:03:29.069666 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 14 00:03:29.191710 kubelet[2636]: E0514 00:03:29.191677 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:29.192279 containerd[1478]: time="2025-05-14T00:03:29.192192975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flqnh,Uid:8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea,Namespace:kube-system,Attempt:0,}" May 14 00:03:29.214942 containerd[1478]: time="2025-05-14T00:03:29.214807198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:03:29.214942 containerd[1478]: time="2025-05-14T00:03:29.214870708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:03:29.214942 containerd[1478]: time="2025-05-14T00:03:29.214884956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:03:29.215163 containerd[1478]: time="2025-05-14T00:03:29.214986748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:03:29.240758 systemd[1]: Started cri-containerd-d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160.scope - libcontainer container d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160. 
May 14 00:03:29.265490 containerd[1478]: time="2025-05-14T00:03:29.265431398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flqnh,Uid:8bd7a5a8-d6b2-413f-a2ef-64c29bf01bea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\"" May 14 00:03:29.266418 kubelet[2636]: E0514 00:03:29.266368 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:29.268412 containerd[1478]: time="2025-05-14T00:03:29.268382394Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:03:29.297444 containerd[1478]: time="2025-05-14T00:03:29.297385962Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1\"" May 14 00:03:29.298016 containerd[1478]: time="2025-05-14T00:03:29.297984484Z" level=info msg="StartContainer for \"358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1\"" May 14 00:03:29.327692 systemd[1]: Started cri-containerd-358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1.scope - libcontainer container 358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1. May 14 00:03:29.355902 containerd[1478]: time="2025-05-14T00:03:29.355853367Z" level=info msg="StartContainer for \"358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1\" returns successfully" May 14 00:03:29.368171 systemd[1]: cri-containerd-358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1.scope: Deactivated successfully. 
May 14 00:03:29.402645 containerd[1478]: time="2025-05-14T00:03:29.402579800Z" level=info msg="shim disconnected" id=358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1 namespace=k8s.io May 14 00:03:29.402645 containerd[1478]: time="2025-05-14T00:03:29.402637098Z" level=warning msg="cleaning up after shim disconnected" id=358f4224bfa4d811b3cd695837b16e8da09416223e81e23a792cd3938f7ec4d1 namespace=k8s.io May 14 00:03:29.402645 containerd[1478]: time="2025-05-14T00:03:29.402645805Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:03:29.464424 kubelet[2636]: E0514 00:03:29.464267 2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:29.467270 containerd[1478]: time="2025-05-14T00:03:29.467198599Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:03:29.481479 containerd[1478]: time="2025-05-14T00:03:29.481088008Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f\"" May 14 00:03:29.484087 containerd[1478]: time="2025-05-14T00:03:29.483092031Z" level=info msg="StartContainer for \"981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f\"" May 14 00:03:29.511655 systemd[1]: Started cri-containerd-981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f.scope - libcontainer container 981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f. 
May 14 00:03:29.539843 containerd[1478]: time="2025-05-14T00:03:29.539796360Z" level=info msg="StartContainer for \"981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f\" returns successfully"
May 14 00:03:29.548883 systemd[1]: cri-containerd-981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f.scope: Deactivated successfully.
May 14 00:03:29.576124 containerd[1478]: time="2025-05-14T00:03:29.576061662Z" level=info msg="shim disconnected" id=981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f namespace=k8s.io
May 14 00:03:29.576124 containerd[1478]: time="2025-05-14T00:03:29.576119261Z" level=warning msg="cleaning up after shim disconnected" id=981878c9f32ea5328be1a429053d8ede8dfb65ccd97ea90b1ca1c3e91d218c9f namespace=k8s.io
May 14 00:03:29.576124 containerd[1478]: time="2025-05-14T00:03:29.576128007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 00:03:30.280951 kubelet[2636]: E0514 00:03:30.280903    2636 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 00:03:30.466652 kubelet[2636]: E0514 00:03:30.466621    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:30.469572 containerd[1478]: time="2025-05-14T00:03:30.469250052Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 00:03:30.486633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101108778.mount: Deactivated successfully.
May 14 00:03:30.489154 containerd[1478]: time="2025-05-14T00:03:30.489113992Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1\""
May 14 00:03:30.493532 containerd[1478]: time="2025-05-14T00:03:30.489709460Z" level=info msg="StartContainer for \"6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1\""
May 14 00:03:30.532679 systemd[1]: Started cri-containerd-6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1.scope - libcontainer container 6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1.
May 14 00:03:30.565099 containerd[1478]: time="2025-05-14T00:03:30.565065336Z" level=info msg="StartContainer for \"6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1\" returns successfully"
May 14 00:03:30.567047 systemd[1]: cri-containerd-6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1.scope: Deactivated successfully.
May 14 00:03:30.593339 containerd[1478]: time="2025-05-14T00:03:30.593271345Z" level=info msg="shim disconnected" id=6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1 namespace=k8s.io
May 14 00:03:30.593339 containerd[1478]: time="2025-05-14T00:03:30.593330046Z" level=warning msg="cleaning up after shim disconnected" id=6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1 namespace=k8s.io
May 14 00:03:30.593339 containerd[1478]: time="2025-05-14T00:03:30.593340937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 00:03:30.983894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6082a38f567f54ed7e29ffd8b81049aa12de118bc47346de540355ed9cd723d1-rootfs.mount: Deactivated successfully.
May 14 00:03:31.469930 kubelet[2636]: E0514 00:03:31.469896    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:31.471815 containerd[1478]: time="2025-05-14T00:03:31.471775322Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 00:03:31.487519 containerd[1478]: time="2025-05-14T00:03:31.487459408Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f\""
May 14 00:03:31.487963 containerd[1478]: time="2025-05-14T00:03:31.487935870Z" level=info msg="StartContainer for \"0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f\""
May 14 00:03:31.522652 systemd[1]: Started cri-containerd-0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f.scope - libcontainer container 0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f.
May 14 00:03:31.546341 systemd[1]: cri-containerd-0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f.scope: Deactivated successfully.
May 14 00:03:31.547903 containerd[1478]: time="2025-05-14T00:03:31.547869191Z" level=info msg="StartContainer for \"0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f\" returns successfully"
May 14 00:03:31.570044 containerd[1478]: time="2025-05-14T00:03:31.569984105Z" level=info msg="shim disconnected" id=0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f namespace=k8s.io
May 14 00:03:31.570044 containerd[1478]: time="2025-05-14T00:03:31.570044249Z" level=warning msg="cleaning up after shim disconnected" id=0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f namespace=k8s.io
May 14 00:03:31.570351 containerd[1478]: time="2025-05-14T00:03:31.570055561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 00:03:31.983989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0645be56cd037f67586aacef50dfae9c43c8c897002cf47d420c1dbb6d37b23f-rootfs.mount: Deactivated successfully.
May 14 00:03:32.200824 kubelet[2636]: I0514 00:03:32.200766    2636 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:03:32Z","lastTransitionTime":"2025-05-14T00:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 00:03:32.473990 kubelet[2636]: E0514 00:03:32.473951    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:32.476142 containerd[1478]: time="2025-05-14T00:03:32.476093246Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 00:03:32.496626 containerd[1478]: time="2025-05-14T00:03:32.496572318Z" level=info msg="CreateContainer within sandbox \"d289773e884f5346ce6ee9e90e073445c939eb146fd6a8c73f64add803f65160\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89dc75b8987d0dabe30c5a7bb8d2fee3178542a41c0170316d82b8292d6b7e89\""
May 14 00:03:32.497096 containerd[1478]: time="2025-05-14T00:03:32.497070070Z" level=info msg="StartContainer for \"89dc75b8987d0dabe30c5a7bb8d2fee3178542a41c0170316d82b8292d6b7e89\""
May 14 00:03:32.528802 systemd[1]: Started cri-containerd-89dc75b8987d0dabe30c5a7bb8d2fee3178542a41c0170316d82b8292d6b7e89.scope - libcontainer container 89dc75b8987d0dabe30c5a7bb8d2fee3178542a41c0170316d82b8292d6b7e89.
May 14 00:03:32.563244 containerd[1478]: time="2025-05-14T00:03:32.563126927Z" level=info msg="StartContainer for \"89dc75b8987d0dabe30c5a7bb8d2fee3178542a41c0170316d82b8292d6b7e89\" returns successfully"
May 14 00:03:32.999343 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 14 00:03:33.478102 kubelet[2636]: E0514 00:03:33.478070    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:35.192964 kubelet[2636]: E0514 00:03:35.192679    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:35.214541 kubelet[2636]: E0514 00:03:35.211549    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:36.148094 systemd-networkd[1401]: lxc_health: Link UP
May 14 00:03:36.148408 systemd-networkd[1401]: lxc_health: Gained carrier
May 14 00:03:37.193584 kubelet[2636]: E0514 00:03:37.193334    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:37.215619 kubelet[2636]: I0514 00:03:37.212684    2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-flqnh" podStartSLOduration=9.212668307 podStartE2EDuration="9.212668307s" podCreationTimestamp="2025-05-14 00:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:33.496892293 +0000 UTC m=+83.388611084" watchObservedRunningTime="2025-05-14 00:03:37.212668307 +0000 UTC m=+87.104387098"
May 14 00:03:37.261681 systemd-networkd[1401]: lxc_health: Gained IPv6LL
May 14 00:03:37.485956 kubelet[2636]: E0514 00:03:37.485819    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:38.487360 kubelet[2636]: E0514 00:03:38.487327    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:03:41.701815 sshd[4475]: Connection closed by 10.0.0.1 port 43596
May 14 00:03:41.702312 sshd-session[4472]: pam_unix(sshd:session): session closed for user core
May 14 00:03:41.705953 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:43596.service: Deactivated successfully.
May 14 00:03:41.707851 systemd[1]: session-27.scope: Deactivated successfully.
May 14 00:03:41.708498 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit.
May 14 00:03:41.709418 systemd-logind[1462]: Removed session 27.
May 14 00:03:42.211769 kubelet[2636]: E0514 00:03:42.211728    2636 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"