Dec 13 00:25:44.142967 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 20:55:10 -00 2025 Dec 13 00:25:44.143006 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:25:44.143022 kernel: BIOS-provided physical RAM map: Dec 13 00:25:44.143034 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:25:44.143045 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:25:44.143060 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:25:44.143074 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:25:44.143087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Dec 13 00:25:44.143098 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:25:44.143110 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:25:44.143122 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:25:44.143134 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:25:44.143146 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:25:44.143155 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:25:44.143179 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:25:44.143189 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:25:44.143199 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:25:44.143209 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:25:44.143221 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:25:44.143232 kernel: NX (Execute Disable) protection: active Dec 13 00:25:44.143242 kernel: APIC: Static calls initialized Dec 13 00:25:44.143252 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Dec 13 00:25:44.143262 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Dec 13 00:25:44.143272 kernel: extended physical RAM map: Dec 13 00:25:44.143282 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:25:44.143292 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:25:44.143302 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:25:44.143312 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:25:44.143321 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Dec 13 00:25:44.143334 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Dec 13 00:25:44.143344 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Dec 13 00:25:44.143354 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Dec 13 00:25:44.143364 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Dec 13 00:25:44.143374 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:25:44.143385 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:25:44.143394 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:25:44.143404 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:25:44.143414 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:25:44.143424 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:25:44.143437 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:25:44.143469 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:25:44.143479 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:25:44.143490 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:25:44.143500 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:25:44.143513 kernel: efi: EFI v2.7 by EDK II Dec 13 00:25:44.143523 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Dec 13 00:25:44.143534 kernel: random: crng init done Dec 13 00:25:44.143544 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 13 00:25:44.143554 kernel: secureboot: Secure boot enabled Dec 13 00:25:44.143565 kernel: SMBIOS 2.8 present. Dec 13 00:25:44.143575 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 13 00:25:44.143586 kernel: DMI: Memory slots populated: 1/1 Dec 13 00:25:44.143596 kernel: Hypervisor detected: KVM Dec 13 00:25:44.143609 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:25:44.143620 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 00:25:44.143630 kernel: kvm-clock: using sched offset of 4965606529 cycles Dec 13 00:25:44.143641 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 00:25:44.143653 kernel: tsc: Detected 2794.748 MHz processor Dec 13 00:25:44.143665 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 00:25:44.143676 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 00:25:44.143687 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:25:44.143698 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 00:25:44.143712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 00:25:44.143723 kernel: Using GB pages for direct mapping Dec 13 00:25:44.143734 kernel: ACPI: Early table checksum verification disabled Dec 13 00:25:44.143745 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Dec 13 00:25:44.143756 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 00:25:44.143767 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:25:44.143778 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:25:44.143792 kernel: ACPI: FACS 0x000000009BBDD000 000040 Dec 13 00:25:44.143803 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:25:44.143814 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:25:44.143824 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 00:25:44.143835 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:25:44.143846 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 00:25:44.143857 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Dec 13 00:25:44.143870 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Dec 13 00:25:44.143881 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Dec 13 00:25:44.143892 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Dec 13 00:25:44.143903 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Dec 13 00:25:44.143914 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Dec 13 00:25:44.143924 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Dec 13 00:25:44.143935 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Dec 13 00:25:44.143949 kernel: No NUMA configuration found Dec 13 00:25:44.143960 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Dec 13 00:25:44.143971 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Dec 13 00:25:44.143982 kernel: Zone ranges: Dec 13 00:25:44.143993 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 00:25:44.144004 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Dec 13 00:25:44.144015 kernel: Normal empty Dec 13 00:25:44.144026 kernel: Device empty Dec 13 00:25:44.144039 kernel: Movable zone start for each node Dec 13 00:25:44.144050 kernel: Early memory node ranges Dec 13 00:25:44.144060 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Dec 13 00:25:44.144071 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Dec 13 00:25:44.144082 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Dec 13 00:25:44.144093 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Dec 13 00:25:44.144104 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Dec 13 00:25:44.144114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Dec 13 00:25:44.144128 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 00:25:44.144139 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Dec 13 00:25:44.144149 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 00:25:44.144161 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 00:25:44.144182 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 13 00:25:44.144193 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Dec 13 00:25:44.144204 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 00:25:44.144218 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 00:25:44.144228 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 00:25:44.144239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 00:25:44.144250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 00:25:44.144261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 00:25:44.144272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 00:25:44.144283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 00:25:44.144297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 00:25:44.144308 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 00:25:44.144319 kernel: TSC 
deadline timer available Dec 13 00:25:44.144329 kernel: CPU topo: Max. logical packages: 1 Dec 13 00:25:44.144340 kernel: CPU topo: Max. logical dies: 1 Dec 13 00:25:44.144359 kernel: CPU topo: Max. dies per package: 1 Dec 13 00:25:44.144373 kernel: CPU topo: Max. threads per core: 1 Dec 13 00:25:44.144385 kernel: CPU topo: Num. cores per package: 4 Dec 13 00:25:44.144396 kernel: CPU topo: Num. threads per package: 4 Dec 13 00:25:44.144406 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 13 00:25:44.144420 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 00:25:44.144432 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 00:25:44.144598 kernel: kvm-guest: setup PV sched yield Dec 13 00:25:44.144613 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 13 00:25:44.144628 kernel: Booting paravirtualized kernel on KVM Dec 13 00:25:44.144640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 00:25:44.144652 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 00:25:44.144663 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 13 00:25:44.145035 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 13 00:25:44.145048 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 00:25:44.145059 kernel: kvm-guest: PV spinlocks enabled Dec 13 00:25:44.145075 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 00:25:44.145088 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:25:44.145099 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 00:25:44.145111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 00:25:44.145122 kernel: Fallback order for Node 0: 0 Dec 13 00:25:44.145134 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Dec 13 00:25:44.145145 kernel: Policy zone: DMA32 Dec 13 00:25:44.145159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 00:25:44.145179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 00:25:44.145191 kernel: ftrace: allocating 40103 entries in 157 pages Dec 13 00:25:44.145202 kernel: ftrace: allocated 157 pages with 5 groups Dec 13 00:25:44.145214 kernel: Dynamic Preempt: voluntary Dec 13 00:25:44.145225 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 00:25:44.145237 kernel: rcu: RCU event tracing is enabled. Dec 13 00:25:44.145248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 00:25:44.145274 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 00:25:44.145287 kernel: Rude variant of Tasks RCU enabled. Dec 13 00:25:44.145299 kernel: Tracing variant of Tasks RCU enabled. Dec 13 00:25:44.145311 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 00:25:44.145321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 00:25:44.145333 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Dec 13 00:25:44.145344 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:25:44.145359 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:25:44.145371 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 00:25:44.145382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 00:25:44.145394 kernel: Console: colour dummy device 80x25 Dec 13 00:25:44.145405 kernel: printk: legacy console [ttyS0] enabled Dec 13 00:25:44.145416 kernel: ACPI: Core revision 20240827 Dec 13 00:25:44.145428 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 00:25:44.145460 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 00:25:44.145472 kernel: x2apic enabled Dec 13 00:25:44.145484 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 00:25:44.145495 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 00:25:44.145507 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 00:25:44.145519 kernel: kvm-guest: setup PV IPIs Dec 13 00:25:44.145530 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 00:25:44.145546 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:25:44.145558 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 00:25:44.145569 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 00:25:44.145580 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 00:25:44.145592 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 00:25:44.145603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 00:25:44.145615 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 00:25:44.145628 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 13 00:25:44.145640 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 00:25:44.145651 kernel: active return thunk: retbleed_return_thunk Dec 13 00:25:44.145663 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 00:25:44.145674 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 00:25:44.145686 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 00:25:44.145697 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 00:25:44.145712 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 00:25:44.145724 kernel: active return thunk: srso_return_thunk Dec 13 00:25:44.145736 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 00:25:44.145747 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 00:25:44.145758 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 00:25:44.145770 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 00:25:44.145781 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 00:25:44.145795 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 13 00:25:44.145807 kernel: Freeing SMP alternatives memory: 32K Dec 13 00:25:44.145818 kernel: pid_max: default: 32768 minimum: 301 Dec 13 00:25:44.145830 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 13 00:25:44.145841 kernel: landlock: Up and running. Dec 13 00:25:44.145853 kernel: SELinux: Initializing. Dec 13 00:25:44.145864 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:25:44.145879 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:25:44.145890 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 00:25:44.145902 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 00:25:44.145913 kernel: ... version: 0 Dec 13 00:25:44.145925 kernel: ... bit width: 48 Dec 13 00:25:44.145936 kernel: ... generic registers: 6 Dec 13 00:25:44.145947 kernel: ... value mask: 0000ffffffffffff Dec 13 00:25:44.145962 kernel: ... max period: 00007fffffffffff Dec 13 00:25:44.145973 kernel: ... fixed-purpose events: 0 Dec 13 00:25:44.145984 kernel: ... event mask: 000000000000003f Dec 13 00:25:44.145996 kernel: signal: max sigframe size: 1776 Dec 13 00:25:44.146007 kernel: rcu: Hierarchical SRCU implementation. Dec 13 00:25:44.146019 kernel: rcu: Max phase no-delay instances is 400. Dec 13 00:25:44.146030 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 13 00:25:44.146045 kernel: smp: Bringing up secondary CPUs ... Dec 13 00:25:44.146056 kernel: smpboot: x86: Booting SMP configuration: Dec 13 00:25:44.146067 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 00:25:44.146079 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 00:25:44.146090 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 00:25:44.146102 kernel: Memory: 2425596K/2552216K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15596K init, 2444K bss, 120680K reserved, 0K cma-reserved) Dec 13 00:25:44.146114 kernel: devtmpfs: initialized Dec 13 00:25:44.146128 kernel: x86/mm: Memory block size: 128MB Dec 13 00:25:44.146139 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Dec 13 00:25:44.146150 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Dec 13 00:25:44.146162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 00:25:44.146183 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 00:25:44.146195 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 00:25:44.146207 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 00:25:44.146221 kernel: audit: initializing netlink subsys (disabled) Dec 13 00:25:44.146232 kernel: audit: type=2000 audit(1765585541.941:1): state=initialized audit_enabled=0 res=1 Dec 13 00:25:44.146243 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 00:25:44.146255 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 00:25:44.146266 kernel: cpuidle: using governor menu Dec 13 00:25:44.146277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 00:25:44.146289 kernel: dca service started, version 1.12.1 Dec 13 00:25:44.146300 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 13 00:25:44.146314 kernel: PCI: Using configuration type 1 for base access Dec 13 00:25:44.146326 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Dec 13 00:25:44.146337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 00:25:44.146349 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 00:25:44.146360 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 00:25:44.146371 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 00:25:44.146383 kernel: ACPI: Added _OSI(Module Device) Dec 13 00:25:44.146397 kernel: ACPI: Added _OSI(Processor Device) Dec 13 00:25:44.146408 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 00:25:44.146419 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 00:25:44.146431 kernel: ACPI: Interpreter enabled Dec 13 00:25:44.146458 kernel: ACPI: PM: (supports S0 S5) Dec 13 00:25:44.146470 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 00:25:44.146481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 00:25:44.146497 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 00:25:44.146508 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 00:25:44.146520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 00:25:44.146799 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 00:25:44.147015 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 00:25:44.147247 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 00:25:44.147270 kernel: PCI host bridge to bus 0000:00 Dec 13 00:25:44.147501 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 00:25:44.147683 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 00:25:44.147839 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 00:25:44.148034 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 13 00:25:44.148248 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 13 00:25:44.148473 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:25:44.148677 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 00:25:44.148923 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 13 00:25:44.149151 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 13 00:25:44.149385 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 13 00:25:44.149620 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 13 00:25:44.149836 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 13 00:25:44.150077 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 00:25:44.150317 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 13 00:25:44.150555 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 13 00:25:44.150776 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 13 00:25:44.151006 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 13 00:25:44.151292 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 13 00:25:44.151562 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 13 00:25:44.151782 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 13 00:25:44.152002 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 13 00:25:44.152382 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 13 00:25:44.152613 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 13 00:25:44.152828 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 13 00:25:44.153035 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 13 00:25:44.153257 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 13 00:25:44.153496 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 13 00:25:44.153712 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 00:25:44.153934 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 13 00:25:44.154145 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 13 00:25:44.154364 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 13 00:25:44.154601 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 13 00:25:44.154821 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 13 00:25:44.154839 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 00:25:44.154851 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 00:25:44.154863 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 00:25:44.154875 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 00:25:44.154886 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 00:25:44.154898 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 00:25:44.154914 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 00:25:44.154926 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 00:25:44.154938 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 00:25:44.154949 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 00:25:44.154961 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 00:25:44.154973 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 00:25:44.154984 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 00:25:44.154999 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 00:25:44.155010 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 00:25:44.155021 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 00:25:44.155033 kernel: iommu: Default domain type: Translated Dec 13 00:25:44.155045 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 00:25:44.155056 kernel: efivars: Registered efivars operations Dec 13 00:25:44.155067 kernel: PCI: Using ACPI for IRQ routing Dec 13 00:25:44.155082 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 00:25:44.155093 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Dec 13 00:25:44.155104 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Dec 13 00:25:44.155122 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Dec 13 00:25:44.155137 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Dec 13 00:25:44.155165 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Dec 13 00:25:44.155649 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 00:25:44.156130 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 00:25:44.156377 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 00:25:44.156396 kernel: vgaarb: loaded Dec 13 00:25:44.156408 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 00:25:44.156420 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 00:25:44.156432 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 00:25:44.156462 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 00:25:44.156479 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 00:25:44.156492 kernel: pnp: PnP ACPI init Dec 13 00:25:44.156727 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 13 00:25:44.156746 kernel: pnp: PnP ACPI: found 6 devices Dec 13 00:25:44.156760 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 00:25:44.156772 kernel: NET: Registered PF_INET protocol family Dec 13 00:25:44.156784 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 00:25:44.156800 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 00:25:44.156812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 00:25:44.156824 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 00:25:44.156835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 00:25:44.156847 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 00:25:44.156858 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:25:44.156870 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:25:44.156885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 00:25:44.156897 kernel: NET: Registered PF_XDP protocol family Dec 13 00:25:44.157116 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 13 00:25:44.157351 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 13 00:25:44.157575 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 00:25:44.157778 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 00:25:44.157988 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 00:25:44.158195 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 13 00:25:44.158394 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 13 00:25:44.158613 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:25:44.158631 kernel: PCI: CLS 0 bytes, default 64 Dec 13 00:25:44.158643 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:25:44.158655 kernel: Initialise system trusted keyrings Dec 13 00:25:44.158672 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 00:25:44.158696 kernel: Key type asymmetric registered Dec 13 00:25:44.158717 kernel: Asymmetric key parser 'x509' registered Dec 13 00:25:44.158749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 00:25:44.158763 kernel: io scheduler mq-deadline registered Dec 13 00:25:44.158775 kernel: io scheduler kyber registered Dec 13 00:25:44.158787 kernel: io scheduler bfq registered Dec 13 00:25:44.158802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 00:25:44.158815 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 00:25:44.158827 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 00:25:44.158838 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 00:25:44.158850 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 00:25:44.158862 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 00:25:44.158874 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 00:25:44.158890 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 00:25:44.158901 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 00:25:44.158913 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 00:25:44.159190 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 00:25:44.159399 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 00:25:44.159625 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T00:25:42 UTC (1765585542) Dec 13 00:25:44.159832 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 00:25:44.159850 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 00:25:44.159863 kernel: efifb: probing for efifb Dec 13 00:25:44.159875 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 00:25:44.159888 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 00:25:44.159900 kernel: efifb: scrolling: redraw Dec 13 00:25:44.159912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 00:25:44.159930 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 00:25:44.159945 kernel: fb0: EFI VGA frame buffer device Dec 13 00:25:44.159957 kernel: pstore: Using crash dump compression: deflate Dec 13 00:25:44.159970 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 00:25:44.159982 kernel: NET: Registered PF_INET6 protocol family Dec 13 00:25:44.159997 kernel: Segment Routing with IPv6 Dec 13 00:25:44.160010 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 00:25:44.160022 kernel: NET: Registered PF_PACKET protocol family Dec 13 00:25:44.160034 kernel: Key type dns_resolver registered Dec 13 00:25:44.160046 kernel: IPI shorthand broadcast: enabled Dec 13 00:25:44.160058 kernel: sched_clock: Marking stable (1828002661, 299900543)->(2215683297, -87780093) Dec 13 00:25:44.160070 kernel: registered taskstats version 1 Dec 13 00:25:44.160085 kernel: Loading compiled-in X.509 certificates Dec 13 00:25:44.160098 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 199a9f6885410acbf0a1b178e5562253352ca03c' Dec 13 00:25:44.160110 kernel: Demotion targets for Node 0: null Dec 13 00:25:44.160122 kernel: Key type .fscrypt registered Dec 13 00:25:44.160134 kernel: Key type fscrypt-provisioning registered Dec 13 00:25:44.160146 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 00:25:44.160158 kernel: ima: Allocated hash algorithm: sha1 Dec 13 00:25:44.160184 kernel: ima: No architecture policies found Dec 13 00:25:44.160197 kernel: clk: Disabling unused clocks Dec 13 00:25:44.160209 kernel: Freeing unused kernel image (initmem) memory: 15596K Dec 13 00:25:44.160221 kernel: Write protecting the kernel read-only data: 47104k Dec 13 00:25:44.160233 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 13 00:25:44.160246 kernel: Run /init as init process Dec 13 00:25:44.160258 kernel: with arguments: Dec 13 00:25:44.160274 kernel: /init Dec 13 00:25:44.160286 kernel: with environment: Dec 13 00:25:44.160298 kernel: HOME=/ Dec 13 00:25:44.160310 kernel: TERM=linux Dec 13 00:25:44.160322 kernel: SCSI subsystem initialized Dec 13 00:25:44.160333 kernel: libata version 3.00 loaded. Dec 13 00:25:44.160576 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 00:25:44.160603 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 00:25:44.160823 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 13 00:25:44.161046 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 13 00:25:44.161276 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 00:25:44.161543 kernel: scsi host0: ahci Dec 13 00:25:44.161780 kernel: scsi host1: ahci Dec 13 00:25:44.162043 kernel: scsi host2: ahci Dec 13 00:25:44.162371 kernel: scsi host3: ahci Dec 13 00:25:44.162625 kernel: scsi host4: ahci Dec 13 00:25:44.162850 kernel: scsi host5: ahci Dec 13 00:25:44.162869 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Dec 13 00:25:44.162882 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Dec 13 00:25:44.162900 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Dec 13 00:25:44.162912 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Dec 13 00:25:44.162925 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 13 00:25:44.162938 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 13 00:25:44.162950 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 00:25:44.162962 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 00:25:44.162977 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 00:25:44.162990 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 00:25:44.163002 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 00:25:44.163014 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 00:25:44.163026 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:25:44.163037 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 00:25:44.163050 kernel: ata3.00: applying bridge limits Dec 13 00:25:44.163062 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:25:44.163076 kernel: ata3.00: configured for UDMA/100 Dec 13 00:25:44.163386 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 00:25:44.163671 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 00:25:44.163880 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 13 00:25:44.163898 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 00:25:44.163916 kernel: GPT:16515071 != 27000831 Dec 13 00:25:44.163928 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 13 00:25:44.163939 kernel: GPT:16515071 != 27000831 Dec 13 00:25:44.163951 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 00:25:44.163963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 00:25:44.164200 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 00:25:44.164215 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 00:25:44.164405 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 00:25:44.164418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 00:25:44.164427 kernel: device-mapper: uevent: version 1.0.3 Dec 13 00:25:44.164436 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 13 00:25:44.164459 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 13 00:25:44.164469 kernel: raid6: avx2x4 gen() 29919 MB/s Dec 13 00:25:44.164477 kernel: raid6: avx2x2 gen() 27814 MB/s Dec 13 00:25:44.164490 kernel: raid6: avx2x1 gen() 24162 MB/s Dec 13 00:25:44.164499 kernel: raid6: using algorithm avx2x4 gen() 29919 MB/s Dec 13 00:25:44.164508 kernel: raid6: .... xor() 7230 MB/s, rmw enabled Dec 13 00:25:44.164517 kernel: raid6: using avx2x2 recovery algorithm Dec 13 00:25:44.164526 kernel: xor: automatically using best checksumming function avx Dec 13 00:25:44.164535 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 00:25:44.164544 kernel: BTRFS: device fsid 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Dec 13 00:25:44.164555 kernel: BTRFS info (device dm-0): first mount of filesystem 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d Dec 13 00:25:44.164564 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:44.164573 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 00:25:44.164582 kernel: BTRFS info (device dm-0): enabling free space tree Dec 13 00:25:44.164591 kernel: loop: module loaded Dec 13 00:25:44.164600 kernel: loop0: detected capacity change from 0 to 100528 Dec 13 00:25:44.164609 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 00:25:44.164621 systemd[1]: Successfully made /usr/ read-only. Dec 13 00:25:44.164635 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:25:44.164645 systemd[1]: Detected virtualization kvm. Dec 13 00:25:44.164654 systemd[1]: Detected architecture x86-64. Dec 13 00:25:44.164663 systemd[1]: Running in initrd. Dec 13 00:25:44.164672 systemd[1]: No hostname configured, using default hostname. Dec 13 00:25:44.164684 systemd[1]: Hostname set to . Dec 13 00:25:44.164693 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:25:44.164703 systemd[1]: Queued start job for default target initrd.target. Dec 13 00:25:44.164712 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:25:44.164722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:44.164732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 00:25:44.164742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 00:25:44.164753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:25:44.164764 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 00:25:44.164777 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 00:25:44.164790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:25:44.164804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:44.164819 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:25:44.164832 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:25:44.164845 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:25:44.164859 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:25:44.164871 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:25:44.164884 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:25:44.164897 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:25:44.164917 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:44.164933 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 00:25:44.164949 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 13 00:25:44.164965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:44.164981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:44.164997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:44.165012 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:25:44.165032 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 00:25:44.165046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 00:25:44.165056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:25:44.165065 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 00:25:44.165075 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 13 00:25:44.165084 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 00:25:44.165098 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:25:44.165111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:25:44.165125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:44.165138 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 00:25:44.165154 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:44.165167 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 00:25:44.165190 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 00:25:44.165229 systemd-journald[315]: Collecting audit messages is enabled. 
Dec 13 00:25:44.165260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 00:25:44.165272 kernel: Bridge firewalling registered Dec 13 00:25:44.165285 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:25:44.165297 systemd-journald[315]: Journal started Dec 13 00:25:44.165324 systemd-journald[315]: Runtime Journal (/run/log/journal/deb73ef0b9794406bb8d810344dbbacf) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:25:44.163243 systemd-modules-load[318]: Inserted module 'br_netfilter' Dec 13 00:25:44.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.171467 kernel: audit: type=1130 audit(1765585544.166:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.171497 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:25:44.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.174510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:25:44.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.182103 kernel: audit: type=1130 audit(1765585544.173:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.182127 kernel: audit: type=1130 audit(1765585544.177:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.190222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:25:44.191714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:25:44.195560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 00:25:44.209503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:44.217406 kernel: audit: type=1130 audit(1765585544.209:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.215465 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 00:25:44.227939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 00:25:44.235087 kernel: audit: type=1130 audit(1765585544.228:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.228102 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 13 00:25:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.239631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:44.245494 kernel: audit: type=1130 audit(1765585544.239:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.247544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:44.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.247000 audit: BPF prog-id=6 op=LOAD Dec 13 00:25:44.253100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:25:44.256954 kernel: audit: type=1130 audit(1765585544.247:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.256982 kernel: audit: type=1334 audit(1765585544.247:9): prog-id=6 op=LOAD Dec 13 00:25:44.256627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:25:44.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.258773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 00:25:44.270796 kernel: audit: type=1130 audit(1765585544.256:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.299040 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:25:44.347735 systemd-resolved[356]: Positive Trust Anchors: Dec 13 00:25:44.347760 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:25:44.347765 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:25:44.347806 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:25:44.385227 systemd-resolved[356]: Defaulting to hostname 'linux'. Dec 13 00:25:44.388304 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:25:44.391092 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:44.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.458484 kernel: Loading iSCSI transport class v2.0-870. Dec 13 00:25:44.475491 kernel: iscsi: registered transport (tcp) Dec 13 00:25:44.499467 kernel: iscsi: registered transport (qla4xxx) Dec 13 00:25:44.499529 kernel: QLogic iSCSI HBA Driver Dec 13 00:25:44.527972 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:25:44.562872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:44.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.569272 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:25:44.630666 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 00:25:44.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.636483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 00:25:44.640727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 00:25:44.689870 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:25:44.693650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:44.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.692000 audit: BPF prog-id=7 op=LOAD Dec 13 00:25:44.692000 audit: BPF prog-id=8 op=LOAD Dec 13 00:25:44.733675 systemd-udevd[599]: Using default interface naming scheme 'v257'. Dec 13 00:25:44.750072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:44.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:44.752314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 00:25:44.779210 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:25:44.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.782000 audit: BPF prog-id=9 op=LOAD Dec 13 00:25:44.785490 dracut-pre-trigger[666]: rd.md=0: removing MD RAID activation Dec 13 00:25:44.783250 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:25:44.819837 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:25:44.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.823777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:25:44.845653 systemd-networkd[707]: lo: Link UP Dec 13 00:25:44.845663 systemd-networkd[707]: lo: Gained carrier Dec 13 00:25:44.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.846250 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:25:44.849071 systemd[1]: Reached target network.target - Network. Dec 13 00:25:44.920558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:44.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:44.925025 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 00:25:44.983265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 00:25:45.006735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:25:45.021466 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 00:25:45.034024 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 00:25:45.058878 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:45.058891 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:25:45.061342 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 00:25:45.072196 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 13 00:25:45.062712 systemd-networkd[707]: eth0: Link UP Dec 13 00:25:45.063033 systemd-networkd[707]: eth0: Gained carrier Dec 13 00:25:45.063056 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:45.073282 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 13 00:25:45.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:45.074037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:45.074303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:45.081892 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:45.093465 kernel: AES CTR mode by8 optimization enabled Dec 13 00:25:45.098507 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:25:45.101718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:45.118055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:45.118180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:45.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:45.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:45.124882 disk-uuid[836]: Primary Header is updated. Dec 13 00:25:45.124882 disk-uuid[836]: Secondary Entries is updated. Dec 13 00:25:45.124882 disk-uuid[836]: Secondary Header is updated. Dec 13 00:25:45.128875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:45.161300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:45.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:45.216202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 00:25:45.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:45.217872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:25:45.220415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:45.220951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:25:45.223255 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 00:25:45.251133 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:25:45.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.221955 disk-uuid[838]: Warning: The kernel is still using the old partition table. Dec 13 00:25:46.221955 disk-uuid[838]: The new table will be used at the next reboot or after you Dec 13 00:25:46.221955 disk-uuid[838]: run partprobe(8) or kpartx(8) Dec 13 00:25:46.221955 disk-uuid[838]: The operation has completed successfully. Dec 13 00:25:46.293664 systemd[1]: disk-uuid.service: Deactivated successfully. 
Dec 13 00:25:46.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.293801 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 00:25:46.309629 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 00:25:46.309655 kernel: audit: type=1130 audit(1765585546.293:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.309671 kernel: audit: type=1131 audit(1765585546.293:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.296038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 00:25:46.371874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Dec 13 00:25:46.371953 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:46.371965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:46.376906 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:46.376928 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:46.385472 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:46.386035 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 00:25:46.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.390343 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 00:25:46.396363 kernel: audit: type=1130 audit(1765585546.389:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:46.528624 ignition[884]: Ignition 2.24.0 Dec 13 00:25:46.528644 ignition[884]: Stage: fetch-offline Dec 13 00:25:46.528709 ignition[884]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:46.528725 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:46.528857 ignition[884]: parsed url from cmdline: "" Dec 13 00:25:46.528861 ignition[884]: no config URL provided Dec 13 00:25:46.528967 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 00:25:46.528982 ignition[884]: no config at "/usr/lib/ignition/user.ign" Dec 13 00:25:46.529034 ignition[884]: op(1): [started] loading QEMU firmware config module Dec 13 00:25:46.529039 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 00:25:46.550340 ignition[884]: op(1): [finished] loading QEMU firmware config module Dec 13 00:25:46.579014 ignition[884]: parsing config with SHA512: ad8af87417ea1658e6738f961824b27894ffcb6819a14b316b333091fc7bde7934654f1ce2657a4602404c55cb99ca0369940f2ba2a9a96263dd389239070071 Dec 13 00:25:46.582835 unknown[884]: fetched base config from "system" Dec 13 00:25:46.582852 unknown[884]: fetched user config from "qemu" Dec 13 00:25:46.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.583334 ignition[884]: fetch-offline: fetch-offline passed Dec 13 00:25:46.601394 kernel: audit: type=1130 audit(1765585546.590:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.586422 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:25:46.583387 ignition[884]: Ignition finished successfully Dec 13 00:25:46.591343 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 00:25:46.592384 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 00:25:46.630745 ignition[894]: Ignition 2.24.0 Dec 13 00:25:46.630758 ignition[894]: Stage: kargs Dec 13 00:25:46.630911 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:46.630924 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:46.631704 ignition[894]: kargs: kargs passed Dec 13 00:25:46.631744 ignition[894]: Ignition finished successfully Dec 13 00:25:46.638729 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 00:25:46.646006 kernel: audit: type=1130 audit(1765585546.639:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.642569 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 00:25:46.677684 ignition[901]: Ignition 2.24.0 Dec 13 00:25:46.677698 ignition[901]: Stage: disks Dec 13 00:25:46.677870 ignition[901]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:46.677882 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:46.678662 ignition[901]: disks: disks passed Dec 13 00:25:46.690787 kernel: audit: type=1130 audit(1765585546.684:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:46.683692 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 00:25:46.678706 ignition[901]: Ignition finished successfully Dec 13 00:25:46.685415 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 00:25:46.691401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 00:25:46.692028 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:25:46.699665 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:25:46.700357 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:25:46.712293 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 00:25:46.750075 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 00:25:46.908634 systemd-networkd[707]: eth0: Gained IPv6LL Dec 13 00:25:47.133878 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 00:25:47.143676 kernel: audit: type=1130 audit(1765585547.136:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.139603 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 00:25:47.256484 kernel: EXT4-fs (vda9): mounted filesystem fc518408-2cc6-461e-9cc3-fcafcb4d05ba r/w with ordered data mode. Quota mode: none. Dec 13 00:25:47.256708 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 00:25:47.258871 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 00:25:47.262993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:25:47.265951 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 00:25:47.267584 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 00:25:47.267631 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 00:25:47.267665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:25:47.284354 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 13 00:25:47.292521 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Dec 13 00:25:47.296751 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:47.296772 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:47.296784 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:47.296796 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:47.286889 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 00:25:47.297997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:25:47.464608 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 00:25:47.471823 kernel: audit: type=1130 audit(1765585547.464:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.466776 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 00:25:47.486347 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 00:25:47.496275 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 00:25:47.499018 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:47.523996 ignition[1014]: INFO : Ignition 2.24.0 Dec 13 00:25:47.523996 ignition[1014]: INFO : Stage: mount Dec 13 00:25:47.527202 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:47.527202 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:47.527202 ignition[1014]: INFO : mount: mount passed Dec 13 00:25:47.527202 ignition[1014]: INFO : Ignition finished successfully Dec 13 00:25:47.534246 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 00:25:47.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.537398 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 00:25:47.547568 kernel: audit: type=1130 audit(1765585547.536:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.547606 kernel: audit: type=1130 audit(1765585547.541:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:47.549063 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 00:25:47.572498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 00:25:47.604161 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Dec 13 00:25:47.604198 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:47.604211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:47.610133 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:47.610153 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:47.611959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:25:47.645973 ignition[1046]: INFO : Ignition 2.24.0 Dec 13 00:25:47.645973 ignition[1046]: INFO : Stage: files Dec 13 00:25:47.648896 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:47.648896 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:47.648896 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Dec 13 00:25:47.648896 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 00:25:47.648896 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 00:25:47.659758 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 00:25:47.659758 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 00:25:47.659758 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 00:25:47.659758 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:25:47.659758 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 13 00:25:47.653651 unknown[1046]: wrote ssh authorized keys file for user: core Dec 13 00:25:47.851975 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 00:25:48.026628 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:25:48.026628 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:25:48.033603 ignition[1046]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:25:48.033603 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:48.064583 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:48.064583 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:48.064583 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 13 00:25:48.309606 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 00:25:49.128315 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:49.128315 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 00:25:49.137660 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:25:49.527702 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:25:49.527702 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 00:25:49.527702 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 00:25:49.527702 ignition[1046]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:25:49.572389 ignition[1046]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:25:49.572389 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 00:25:49.572389 ignition[1046]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 00:25:49.591901 ignition[1046]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:25:49.602804 ignition[1046]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:25:49.605566 ignition[1046]: INFO : files: files passed Dec 13 00:25:49.605566 ignition[1046]: INFO : Ignition finished successfully Dec 13 00:25:49.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.611177 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 00:25:49.620458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 00:25:49.626639 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 00:25:49.637935 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 00:25:49.638081 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 00:25:49.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.645734 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 00:25:49.650273 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:49.650273 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:49.653963 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:49.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.653349 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:25:49.657647 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 00:25:49.659180 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 00:25:49.712829 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 00:25:49.712959 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 00:25:49.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.726000 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 00:25:49.731537 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 00:25:49.732464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 00:25:49.739840 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Dec 13 00:25:49.775838 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:25:49.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.803246 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 00:25:49.824477 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:25:49.824686 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:49.825930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:49.830919 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 00:25:49.834220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 00:25:49.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.834357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:25:49.884043 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 00:25:49.887414 systemd[1]: Stopped target basic.target - Basic System. Dec 13 00:25:49.888283 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 00:25:49.892201 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:25:49.895515 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 00:25:49.899058 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:25:49.902404 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 00:25:49.905829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:25:49.908958 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 00:25:49.913063 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 00:25:49.915893 systemd[1]: Stopped target swap.target - Swaps. Dec 13 00:25:49.920771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 00:25:49.920935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:25:49.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.925660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:49.926416 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:25:49.929991 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 00:25:49.945904 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:49.949304 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 00:25:49.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.949413 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 00:25:49.954558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 00:25:49.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.954674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:25:49.958048 systemd[1]: Stopped target paths.target - Path Units. Dec 13 00:25:49.960878 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 00:25:49.966541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:25:49.967478 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 00:25:49.972326 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 00:25:49.973220 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 00:25:49.973320 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:25:49.977035 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 00:25:49.977119 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:25:49.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.979829 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 00:25:49.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.979905 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:49.983164 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 00:25:49.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.983286 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:25:49.985861 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 00:25:49.985986 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 00:25:49.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.990149 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 00:25:50.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.992249 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 00:25:50.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.992369 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:49.993615 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 00:25:49.999219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 13 00:25:49.999390 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:50.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.000133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 00:25:50.000232 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:50.006965 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 00:25:50.007101 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:25:50.017228 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 00:25:50.017335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 00:25:50.036737 ignition[1104]: INFO : Ignition 2.24.0 Dec 13 00:25:50.036737 ignition[1104]: INFO : Stage: umount Dec 13 00:25:50.039277 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:50.039277 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:50.039277 ignition[1104]: INFO : umount: umount passed Dec 13 00:25:50.039277 ignition[1104]: INFO : Ignition finished successfully Dec 13 00:25:50.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.043415 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 00:25:50.044003 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 00:25:50.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.044144 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 00:25:50.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.045431 systemd[1]: Stopped target network.target - Network. Dec 13 00:25:50.048410 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 00:25:50.048490 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 00:25:50.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.051324 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 00:25:50.051375 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 00:25:50.054803 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 00:25:50.054855 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Dec 13 00:25:50.092250 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 00:25:50.092303 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 00:25:50.093219 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 00:25:50.098497 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 00:25:50.112094 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 00:25:50.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.112229 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 00:25:50.119351 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 00:25:50.119517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 00:25:50.131000 audit: BPF prog-id=9 op=UNLOAD Dec 13 00:25:50.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.133000 audit: BPF prog-id=6 op=UNLOAD Dec 13 00:25:50.133968 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 00:25:50.135801 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 00:25:50.135844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:50.140078 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 00:25:50.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.141375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 00:25:50.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.141431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:25:50.142279 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 00:25:50.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.142336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:50.149418 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 00:25:50.149496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 00:25:50.150426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:50.152831 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 00:25:50.169631 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 00:25:50.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.171919 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 00:25:50.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.172002 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 00:25:50.179669 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 00:25:50.180007 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:50.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.181500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 00:25:50.181598 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:50.185903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 00:25:50.185982 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:50.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.191798 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 00:25:50.191900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:25:50.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.194057 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 00:25:50.194147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 00:25:50.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.233924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 00:25:50.233979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:25:50.279257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 00:25:50.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.280909 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 00:25:50.280965 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:50.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.284834 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 00:25:50.284891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:25:50.294389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:50.296118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 00:25:50.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.306505 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 00:25:50.306674 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 00:25:50.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.314923 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 00:25:50.315095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 00:25:50.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:50.346436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 00:25:50.352551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 00:25:50.373487 systemd[1]: Switching root. Dec 13 00:25:50.416465 systemd-journald[315]: Journal stopped Dec 13 00:25:52.218217 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Dec 13 00:25:52.218300 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 00:25:52.218320 kernel: SELinux: policy capability open_perms=1 Dec 13 00:25:52.218347 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 00:25:52.218368 kernel: SELinux: policy capability always_check_network=0 Dec 13 00:25:52.218384 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 00:25:52.218401 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 00:25:52.218417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 00:25:52.218436 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 00:25:52.219968 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 00:25:52.219989 systemd[1]: Successfully loaded SELinux policy in 98.115ms. Dec 13 00:25:52.220022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.469ms. Dec 13 00:25:52.220042 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:25:52.220061 systemd[1]: Detected virtualization kvm. Dec 13 00:25:52.220078 systemd[1]: Detected architecture x86-64. Dec 13 00:25:52.220099 systemd[1]: Detected first boot. Dec 13 00:25:52.220123 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:25:52.220142 zram_generator::config[1148]: No configuration found. 
Dec 13 00:25:52.220161 kernel: Guest personality initialized and is inactive Dec 13 00:25:52.220185 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 13 00:25:52.220202 kernel: Initialized host personality Dec 13 00:25:52.220219 kernel: NET: Registered PF_VSOCK protocol family Dec 13 00:25:52.220240 systemd[1]: Populated /etc with preset unit settings. Dec 13 00:25:52.220258 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 13 00:25:52.220274 kernel: audit: type=1334 audit(1765585551.809:88): prog-id=12 op=LOAD Dec 13 00:25:52.220291 kernel: audit: type=1334 audit(1765585551.809:89): prog-id=3 op=UNLOAD Dec 13 00:25:52.220308 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 00:25:52.220326 kernel: audit: type=1334 audit(1765585551.809:90): prog-id=13 op=LOAD Dec 13 00:25:52.220344 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 00:25:52.220366 kernel: audit: type=1334 audit(1765585551.810:91): prog-id=14 op=LOAD Dec 13 00:25:52.220383 kernel: audit: type=1334 audit(1765585551.810:92): prog-id=4 op=UNLOAD Dec 13 00:25:52.220399 kernel: audit: type=1334 audit(1765585551.810:93): prog-id=5 op=UNLOAD Dec 13 00:25:52.220420 kernel: audit: type=1131 audit(1765585551.811:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.220461 kernel: audit: type=1130 audit(1765585551.828:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.220482 kernel: audit: type=1131 audit(1765585551.829:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.220503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 00:25:52.220521 kernel: audit: type=1334 audit(1765585551.838:97): prog-id=12 op=UNLOAD Dec 13 00:25:52.220544 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 00:25:52.220562 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 00:25:52.220580 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 00:25:52.220597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 00:25:52.220618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 00:25:52.220635 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 00:25:52.220652 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 00:25:52.220669 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 00:25:52.220686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:52.220704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:25:52.220721 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 00:25:52.220741 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 13 00:25:52.220764 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 00:25:52.220781 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:25:52.220798 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 00:25:52.220815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:25:52.220832 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:52.220849 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 00:25:52.220869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 00:25:52.220886 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 00:25:52.220903 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 00:25:52.220921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:52.220937 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:25:52.220965 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 00:25:52.220983 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:25:52.221002 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:25:52.221020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 00:25:52.221036 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 00:25:52.221053 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 00:25:52.221070 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:52.221087 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 13 00:25:52.221104 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:52.221127 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 00:25:52.221150 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 00:25:52.221170 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:52.221187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:52.221206 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 00:25:52.221223 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 00:25:52.221241 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 00:25:52.221258 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 00:25:52.221276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:52.221294 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 00:25:52.221314 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 00:25:52.221334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 00:25:52.221354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 00:25:52.221373 systemd[1]: Reached target machines.target - Containers. 
Dec 13 00:25:52.221391 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 00:25:52.221410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:52.221428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:25:52.221552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 00:25:52.221575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:52.221592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:52.221610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:52.221627 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 00:25:52.221644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:52.221664 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 00:25:52.221685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 00:25:52.221703 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 00:25:52.221720 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 00:25:52.221737 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 00:25:52.221755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:52.221774 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:25:52.221791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:25:52.221808 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:25:52.221850 systemd-journald[1211]: Collecting audit messages is enabled. Dec 13 00:25:52.221887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 00:25:52.221908 kernel: fuse: init (API version 7.41) Dec 13 00:25:52.221926 systemd-journald[1211]: Journal started Dec 13 00:25:52.221966 systemd-journald[1211]: Runtime Journal (/run/log/journal/deb73ef0b9794406bb8d810344dbbacf) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:25:51.970000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 00:25:52.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:52.179000 audit: BPF prog-id=14 op=UNLOAD Dec 13 00:25:52.179000 audit: BPF prog-id=13 op=UNLOAD Dec 13 00:25:52.180000 audit: BPF prog-id=15 op=LOAD Dec 13 00:25:52.180000 audit: BPF prog-id=16 op=LOAD Dec 13 00:25:52.180000 audit: BPF prog-id=17 op=LOAD Dec 13 00:25:52.212000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 00:25:52.212000 audit[1211]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffddf5400e0 a2=4000 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:52.212000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 00:25:51.784956 systemd[1]: Queued start job for default target multi-user.target. Dec 13 00:25:51.811279 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 00:25:51.811906 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 00:25:52.236724 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 00:25:52.269845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:25:52.274468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:52.274507 kernel: ACPI: bus type drm_connector registered Dec 13 00:25:52.278588 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:25:52.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.280927 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 00:25:52.282660 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 00:25:52.284470 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 00:25:52.286113 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 00:25:52.287947 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 00:25:52.289798 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 00:25:52.291623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:52.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.293851 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 00:25:52.294067 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 00:25:52.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:52.296189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:52.296394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:52.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.336005 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:52.336256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:52.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.339038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:52.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.339242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:52.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.341535 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 00:25:52.341732 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 00:25:52.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.343866 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:52.344083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:52.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.346135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 13 00:25:52.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.348477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:52.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.351633 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 00:25:52.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.354122 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 00:25:52.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.369439 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:25:52.371733 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 00:25:52.375109 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 00:25:52.378158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 00:25:52.378962 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 00:25:52.379076 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:25:52.382979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 00:25:52.385919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:52.386081 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:52.390199 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 00:25:52.394610 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 00:25:52.424882 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:52.426647 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 00:25:52.428995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:52.435719 systemd-journald[1211]: Time spent on flushing to /var/log/journal/deb73ef0b9794406bb8d810344dbbacf is 19.840ms for 1164 entries. Dec 13 00:25:52.435719 systemd-journald[1211]: System Journal (/var/log/journal/deb73ef0b9794406bb8d810344dbbacf) is 8M, max 163.5M, 155.5M free. Dec 13 00:25:53.130815 systemd-journald[1211]: Received client request to flush runtime journal. 
Dec 13 00:25:53.131011 kernel: loop1: detected capacity change from 0 to 171112 Dec 13 00:25:53.131057 kernel: loop1: p1 p2 p3 Dec 13 00:25:53.131083 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. Dec 13 00:25:53.131108 kernel: loop2: detected capacity change from 0 to 229808 Dec 13 00:25:53.131376 kernel: loop3: detected capacity change from 0 to 375256 Dec 13 00:25:53.131463 kernel: loop3: p1 p2 p3 Dec 13 00:25:53.131552 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 00:25:53.131572 kernel: loop4: detected capacity change from 0 to 171112 Dec 13 00:25:53.131594 kernel: loop4: p1 p2 p3 Dec 13 00:25:53.131624 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:53.131646 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:53.131674 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:53.131695 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:53.131716 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:52.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.436211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:25:52.444193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 00:25:52.448095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:52.450331 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 00:25:52.452409 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 00:25:52.576666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:52.755784 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 00:25:52.782732 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 00:25:52.786477 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 00:25:53.006114 (sd-merge)[1274]: device-mapper: reload ioctl on af67e6a29067aeda0590a0009488436dd8f718bac6be743160aad6f147c2927f-verity (253:1) failed: Invalid argument Dec 13 00:25:53.133981 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 00:25:53.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.136832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Dec 13 00:25:53.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.145290 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 00:25:53.168162 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 00:25:53.175808 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 00:25:53.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.295038 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 00:25:53.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.299000 audit: BPF prog-id=18 op=LOAD Dec 13 00:25:53.299000 audit: BPF prog-id=19 op=LOAD Dec 13 00:25:53.299000 audit: BPF prog-id=20 op=LOAD Dec 13 00:25:53.300814 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 00:25:53.304000 audit: BPF prog-id=21 op=LOAD Dec 13 00:25:53.305652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:25:53.311591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:25:53.322000 audit: BPF prog-id=22 op=LOAD Dec 13 00:25:53.322000 audit: BPF prog-id=23 op=LOAD Dec 13 00:25:53.322000 audit: BPF prog-id=24 op=LOAD Dec 13 00:25:53.323826 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 00:25:53.326000 audit: BPF prog-id=25 op=LOAD Dec 13 00:25:53.327000 audit: BPF prog-id=26 op=LOAD Dec 13 00:25:53.327000 audit: BPF prog-id=27 op=LOAD Dec 13 00:25:53.328373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 00:25:53.347762 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 13 00:25:53.347782 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 13 00:25:53.355025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:25:53.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.459994 systemd-nsresourced[1294]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 00:25:53.462014 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 00:25:53.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.466716 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 00:25:53.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:53.520746 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 00:25:53.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.523000 audit: BPF prog-id=8 op=UNLOAD Dec 13 00:25:53.523000 audit: BPF prog-id=7 op=UNLOAD Dec 13 00:25:53.524000 audit: BPF prog-id=28 op=LOAD Dec 13 00:25:53.524000 audit: BPF prog-id=29 op=LOAD Dec 13 00:25:53.525968 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:53.555413 systemd-oomd[1291]: No swap; memory pressure usage will be degraded Dec 13 00:25:53.556311 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 00:25:53.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.574828 systemd-udevd[1313]: Using default interface naming scheme 'v257'. Dec 13 00:25:53.577071 systemd-resolved[1292]: Positive Trust Anchors: Dec 13 00:25:53.577090 systemd-resolved[1292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:25:53.577096 systemd-resolved[1292]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:25:53.577139 systemd-resolved[1292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:25:53.583065 systemd-resolved[1292]: Defaulting to hostname 'linux'. Dec 13 00:25:53.585008 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:25:53.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.588021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:53.630191 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:53.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.635000 audit: BPF prog-id=30 op=LOAD Dec 13 00:25:53.636537 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:25:53.739181 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 13 00:25:53.757098 systemd-networkd[1320]: lo: Link UP Dec 13 00:25:53.757114 systemd-networkd[1320]: lo: Gained carrier Dec 13 00:25:53.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.759259 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:25:53.759501 systemd-networkd[1320]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:53.759507 systemd-networkd[1320]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:25:53.761641 systemd-networkd[1320]: eth0: Link UP Dec 13 00:25:53.761937 systemd-networkd[1320]: eth0: Gained carrier Dec 13 00:25:53.761954 systemd-networkd[1320]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:53.769282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:25:53.772580 systemd[1]: Reached target network.target - Network. Dec 13 00:25:53.781753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 00:25:53.788574 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 00:25:53.792369 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 00:25:53.804794 systemd-networkd[1320]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:25:53.811724 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 00:25:53.830040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 00:25:53.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.826602 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 00:25:53.837291 kernel: ACPI: button: Power Button [PWRF] Dec 13 00:25:53.841598 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 00:25:53.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.850254 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 00:25:53.853881 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 00:25:53.854202 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 00:25:54.008749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:54.021467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:54.022167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 00:25:54.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:54.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:54.034795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:54.082699 kernel: kvm_amd: TSC scaling supported Dec 13 00:25:54.082842 kernel: kvm_amd: Nested Virtualization enabled Dec 13 00:25:54.082886 kernel: kvm_amd: Nested Paging enabled Dec 13 00:25:54.082941 kernel: kvm_amd: LBR virtualization supported Dec 13 00:25:54.083008 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 00:25:54.083063 kernel: kvm_amd: Virtual GIF supported Dec 13 00:25:54.137537 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Dec 13 00:25:54.140477 kernel: loop5: detected capacity change from 0 to 229808 Dec 13 00:25:54.142507 kernel: EDAC MC: Ver: 3.0.0 Dec 13 00:25:54.157069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:54.159487 kernel: loop6: detected capacity change from 0 to 375256 Dec 13 00:25:54.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:54.161552 kernel: loop6: p1 p2 p3 Dec 13 00:25:54.182938 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:54.183054 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:54.185220 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:54.185374 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:54.186716 (sd-merge)[1274]: device-mapper: reload ioctl on c81b0b335c4f741d8803812340292f37f57a6bdf618683fbcdb11178b8725544-verity (253:2) failed: Invalid argument Dec 13 00:25:54.190478 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:54.252479 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 00:25:54.254741 (sd-merge)[1274]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 00:25:54.259055 (sd-merge)[1274]: Merged extensions into '/usr'. Dec 13 00:25:54.263717 systemd[1]: Reload requested from client PID 1256 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 00:25:54.263736 systemd[1]: Reloading... Dec 13 00:25:54.319480 zram_generator::config[1415]: No configuration found. Dec 13 00:25:54.596808 systemd[1]: Reloading finished in 332 ms. Dec 13 00:25:54.627637 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 00:25:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:54.657050 systemd[1]: Starting ensure-sysext.service... Dec 13 00:25:54.659713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 00:25:54.662000 audit: BPF prog-id=31 op=LOAD Dec 13 00:25:54.662000 audit: BPF prog-id=32 op=LOAD Dec 13 00:25:54.662000 audit: BPF prog-id=28 op=UNLOAD Dec 13 00:25:54.662000 audit: BPF prog-id=29 op=UNLOAD Dec 13 00:25:54.663000 audit: BPF prog-id=33 op=LOAD Dec 13 00:25:54.663000 audit: BPF prog-id=21 op=UNLOAD Dec 13 00:25:54.665000 audit: BPF prog-id=34 op=LOAD Dec 13 00:25:54.665000 audit: BPF prog-id=25 op=UNLOAD Dec 13 00:25:54.665000 audit: BPF prog-id=35 op=LOAD Dec 13 00:25:54.665000 audit: BPF prog-id=36 op=LOAD Dec 13 00:25:54.665000 audit: BPF prog-id=26 op=UNLOAD Dec 13 00:25:54.665000 audit: BPF prog-id=27 op=UNLOAD Dec 13 00:25:54.666000 audit: BPF prog-id=37 op=LOAD Dec 13 00:25:54.666000 audit: BPF prog-id=15 op=UNLOAD Dec 13 00:25:54.667000 audit: BPF prog-id=38 op=LOAD Dec 13 00:25:54.667000 audit: BPF prog-id=39 op=LOAD Dec 13 00:25:54.667000 audit: BPF prog-id=16 op=UNLOAD Dec 13 00:25:54.667000 audit: BPF prog-id=17 op=UNLOAD Dec 13 00:25:54.668000 audit: BPF prog-id=40 op=LOAD Dec 13 00:25:54.668000 audit: BPF prog-id=30 op=UNLOAD Dec 13 00:25:54.670000 audit: BPF prog-id=41 op=LOAD Dec 13 00:25:54.670000 audit: BPF prog-id=18 op=UNLOAD Dec 13 00:25:54.670000 audit: BPF prog-id=42 op=LOAD Dec 13 00:25:54.670000 audit: BPF prog-id=43 op=LOAD Dec 13 00:25:54.670000 audit: BPF prog-id=19 op=UNLOAD Dec 13 00:25:54.670000 audit: BPF prog-id=20 op=UNLOAD Dec 13 00:25:54.670000 audit: BPF prog-id=44 op=LOAD Dec 13 00:25:54.670000 audit: BPF prog-id=22 op=UNLOAD Dec 13 00:25:54.671000 audit: BPF prog-id=45 op=LOAD Dec 13 00:25:54.671000 audit: BPF prog-id=46 op=LOAD Dec 13 00:25:54.671000 audit: BPF prog-id=23 op=UNLOAD Dec 13 00:25:54.671000 audit: BPF prog-id=24 op=UNLOAD Dec 13 00:25:54.677987 systemd[1]: Reload requested from client PID 1451 ('systemctl') (unit ensure-sysext.service)... Dec 13 00:25:54.678002 systemd[1]: Reloading... Dec 13 00:25:54.679543 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 00:25:54.679608 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 00:25:54.680040 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 00:25:54.681504 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Dec 13 00:25:54.681609 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Dec 13 00:25:54.688565 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:54.688579 systemd-tmpfiles[1452]: Skipping /boot Dec 13 00:25:54.701132 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:54.701149 systemd-tmpfiles[1452]: Skipping /boot Dec 13 00:25:54.728471 zram_generator::config[1486]: No configuration found. Dec 13 00:25:54.979236 systemd[1]: Reloading finished in 300 ms. Dec 13 00:25:55.005276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:55.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:55.010000 audit: BPF prog-id=47 op=LOAD Dec 13 00:25:55.010000 audit: BPF prog-id=33 op=UNLOAD Dec 13 00:25:55.012000 audit: BPF prog-id=48 op=LOAD Dec 13 00:25:55.012000 audit: BPF prog-id=34 op=UNLOAD Dec 13 00:25:55.012000 audit: BPF prog-id=49 op=LOAD Dec 13 00:25:55.012000 audit: BPF prog-id=50 op=LOAD Dec 13 00:25:55.012000 audit: BPF prog-id=35 op=UNLOAD Dec 13 00:25:55.012000 audit: BPF prog-id=36 op=UNLOAD Dec 13 00:25:55.013000 audit: BPF prog-id=51 op=LOAD Dec 13 00:25:55.013000 audit: BPF prog-id=41 op=UNLOAD Dec 13 00:25:55.013000 audit: BPF prog-id=52 op=LOAD Dec 13 00:25:55.013000 audit: BPF prog-id=53 op=LOAD Dec 13 00:25:55.013000 audit: BPF prog-id=42 op=UNLOAD Dec 13 00:25:55.013000 audit: BPF prog-id=43 op=UNLOAD Dec 13 00:25:55.014000 audit: BPF prog-id=54 op=LOAD Dec 13 00:25:55.014000 audit: BPF prog-id=55 op=LOAD Dec 13 00:25:55.014000 audit: BPF prog-id=31 op=UNLOAD Dec 13 00:25:55.014000 audit: BPF prog-id=32 op=UNLOAD Dec 13 00:25:55.014000 audit: BPF prog-id=56 op=LOAD Dec 13 00:25:55.014000 audit: BPF prog-id=44 op=UNLOAD Dec 13 00:25:55.015000 audit: BPF prog-id=57 op=LOAD Dec 13 00:25:55.036000 audit: BPF prog-id=58 op=LOAD Dec 13 00:25:55.036000 audit: BPF prog-id=45 op=UNLOAD Dec 13 00:25:55.036000 audit: BPF prog-id=46 op=UNLOAD Dec 13 00:25:55.038000 audit: BPF prog-id=59 op=LOAD Dec 13 00:25:55.038000 audit: BPF prog-id=40 op=UNLOAD Dec 13 00:25:55.039000 audit: BPF prog-id=60 op=LOAD Dec 13 00:25:55.039000 audit: BPF prog-id=37 op=UNLOAD Dec 13 00:25:55.040000 audit: BPF prog-id=61 op=LOAD Dec 13 00:25:55.040000 audit: BPF prog-id=62 op=LOAD Dec 13 00:25:55.040000 audit: BPF prog-id=38 op=UNLOAD Dec 13 00:25:55.040000 audit: BPF prog-id=39 op=UNLOAD Dec 13 00:25:55.052082 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:55.055235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 00:25:55.070789 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 00:25:55.075760 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 00:25:55.079421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 00:25:55.084313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.084498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:55.088584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:55.094172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:55.099236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:55.102129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:55.102135 systemd-networkd[1320]: eth0: Gained IPv6LL Dec 13 00:25:55.102370 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:55.102514 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 13 00:25:55.102668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.107158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.107357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:55.107944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:55.108569 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:55.108684 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:55.108797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.111261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 00:25:55.116053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:55.116288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:55.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.116000 audit[1529]: SYSTEM_BOOT pid=1529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.119119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:55.119840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:55.123085 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:55.123325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:55.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:55.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:55.143708 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 00:25:55.145773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.146087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:55.147697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:55.150549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:55.157000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:25:55.157000 audit[1558]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff791160a0 a2=420 a3=0 items=0 ppid=1524 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.157000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:55.158580 augenrules[1558]: No rules Dec 13 00:25:55.159584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:55.164573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:55.166686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:55.166962 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:55.167082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:55.167222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:55.170093 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:55.170586 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:55.173105 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 00:25:55.176007 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 00:25:55.178789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:55.179053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:55.181577 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:55.181812 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:55.184282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 00:25:55.184557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:55.187313 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:55.187615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:55.195522 systemd[1]: Finished ensure-sysext.service. Dec 13 00:25:55.197669 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 00:25:55.208857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:55.208946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:55.211546 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 00:25:55.213680 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 00:25:55.316679 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 00:25:56.817953 systemd-resolved[1292]: Clock change detected. Flushing caches. Dec 13 00:25:56.818057 systemd-timesyncd[1573]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 00:25:56.818099 systemd-timesyncd[1573]: Initial clock synchronization to Sat 2025-12-13 00:25:56.817896 UTC. Dec 13 00:25:56.818411 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 00:25:57.248013 ldconfig[1526]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 00:25:57.257241 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 00:25:57.261002 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 00:25:57.299003 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 00:25:57.301698 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:25:57.303919 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 00:25:57.306183 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 00:25:57.308397 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 13 00:25:57.310666 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 00:25:57.312890 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 00:25:57.315141 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 00:25:57.317514 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 00:25:57.319448 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 00:25:57.321693 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 00:25:57.321733 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:25:57.323363 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:25:57.326043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 00:25:57.330281 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 13 00:25:57.335775 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 00:25:57.338286 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 00:25:57.340619 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 00:25:57.346986 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 00:25:57.349243 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 00:25:57.352168 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 00:25:57.355164 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:25:57.357047 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:25:57.358910 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:25:57.358990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:25:57.360486 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 00:25:57.364472 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 00:25:57.368444 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 00:25:57.396326 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 00:25:57.401261 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 00:25:57.405924 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 00:25:57.407741 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 00:25:57.409299 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 13 00:25:57.425917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:57.427738 jq[1586]: false Dec 13 00:25:57.431229 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing passwd entry cache Dec 13 00:25:57.431476 oslogin_cache_refresh[1588]: Refreshing passwd entry cache Dec 13 00:25:57.449722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 00:25:57.452529 extend-filesystems[1587]: Found /dev/vda6 Dec 13 00:25:57.452891 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 00:25:57.456294 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting users, quitting Dec 13 00:25:57.456282 oslogin_cache_refresh[1588]: Failure getting users, quitting Dec 13 00:25:57.456440 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:57.456440 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing group entry cache Dec 13 00:25:57.456334 oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:57.456443 oslogin_cache_refresh[1588]: Refreshing group entry cache Dec 13 00:25:57.457647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 00:25:57.458494 extend-filesystems[1587]: Found /dev/vda9 Dec 13 00:25:57.461922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 00:25:57.464091 extend-filesystems[1587]: Checking size of /dev/vda9 Dec 13 00:25:57.469038 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting groups, quitting Dec 13 00:25:57.469038 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:57.468666 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 00:25:57.467223 oslogin_cache_refresh[1588]: Failure getting groups, quitting Dec 13 00:25:57.467236 oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:57.481639 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 00:25:57.489457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 00:25:57.490230 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 00:25:57.491841 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 00:25:57.496035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 00:25:57.501890 extend-filesystems[1587]: Resized partition /dev/vda9 Dec 13 00:25:57.512228 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 00:25:57.552027 jq[1615]: true Dec 13 00:25:57.524418 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 00:25:57.524736 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 00:25:57.525101 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 13 00:25:57.525351 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 13 00:25:57.529678 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 00:25:57.530127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 00:25:57.534339 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 00:25:57.539465 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 00:25:57.539853 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 00:25:57.560036 extend-filesystems[1633]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 00:25:57.581125 jq[1628]: true Dec 13 00:25:57.593050 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 00:25:57.593413 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 00:25:57.608899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 00:25:57.711457 tar[1626]: linux-amd64/LICENSE Dec 13 00:25:57.711457 tar[1626]: linux-amd64/helm Dec 13 00:25:57.712569 update_engine[1612]: I20251213 00:25:57.710108 1612 main.cc:92] Flatcar Update Engine starting Dec 13 00:25:57.712995 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 00:25:57.715778 systemd-logind[1607]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 00:25:57.715809 systemd-logind[1607]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 00:25:57.716891 systemd-logind[1607]: New seat seat0. Dec 13 00:25:57.718643 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 00:25:57.768023 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 00:25:57.813599 dbus-daemon[1584]: [system] SELinux support is enabled Dec 13 00:25:57.813952 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 00:25:57.821191 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 00:25:57.821223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 00:25:57.824323 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 00:25:57.824347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 00:25:57.838255 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 00:25:57.839258 update_engine[1612]: I20251213 00:25:57.838930 1612 update_check_scheduler.cc:74] Next update check in 3m14s Dec 13 00:25:57.855160 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 00:25:57.877121 systemd[1]: Started update-engine.service - Update Engine. Dec 13 00:25:57.925722 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 00:25:57.929580 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 00:25:57.941021 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 00:25:58.014529 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 00:25:58.014859 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 00:25:58.029485 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 00:25:58.049035 locksmithd[1677]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 00:25:58.061413 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 00:25:58.066517 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 00:25:58.073718 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 00:25:58.085563 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 00:25:58.158456 extend-filesystems[1633]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 00:25:58.158456 extend-filesystems[1633]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 00:25:58.158456 extend-filesystems[1633]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 00:25:58.167331 extend-filesystems[1587]: Resized filesystem in /dev/vda9 Dec 13 00:25:58.169798 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 00:25:58.170264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 00:25:58.479229 tar[1626]: linux-amd64/README.md Dec 13 00:25:58.505696 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 00:25:58.563478 bash[1666]: Updated "/home/core/.ssh/authorized_keys" Dec 13 00:25:58.565715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 00:25:58.569412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 13 00:25:58.646721 containerd[1645]: time="2025-12-13T00:25:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 00:25:58.648867 containerd[1645]: time="2025-12-13T00:25:58.648479325Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 00:25:58.688608 containerd[1645]: time="2025-12-13T00:25:58.688501030Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.685µs" Dec 13 00:25:58.688608 containerd[1645]: time="2025-12-13T00:25:58.688566523Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 00:25:58.688842 containerd[1645]: time="2025-12-13T00:25:58.688655259Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 00:25:58.688842 containerd[1645]: time="2025-12-13T00:25:58.688674084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 00:25:58.689108 containerd[1645]: time="2025-12-13T00:25:58.689062152Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 00:25:58.689108 containerd[1645]: time="2025-12-13T00:25:58.689093831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689230 containerd[1645]: time="2025-12-13T00:25:58.689192216Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689230 containerd[1645]: time="2025-12-13T00:25:58.689212083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689601 containerd[1645]: time="2025-12-13T00:25:58.689545208Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689601 containerd[1645]: time="2025-12-13T00:25:58.689575946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689808 containerd[1645]: time="2025-12-13T00:25:58.689773236Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:58.689808 containerd[1645]: time="2025-12-13T00:25:58.689802470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.690724 containerd[1645]: time="2025-12-13T00:25:58.690685306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 00:25:58.690950 containerd[1645]: time="2025-12-13T00:25:58.690908244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.692109 containerd[1645]: time="2025-12-13T00:25:58.691809214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.692186 containerd[1645]: 
time="2025-12-13T00:25:58.692152928Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:58.692186 containerd[1645]: time="2025-12-13T00:25:58.692182133Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 00:25:58.693728 containerd[1645]: time="2025-12-13T00:25:58.693609029Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 00:25:58.696170 containerd[1645]: time="2025-12-13T00:25:58.696120349Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 00:25:58.696452 containerd[1645]: time="2025-12-13T00:25:58.696405293Z" level=info msg="metadata content store policy set" policy=shared Dec 13 00:25:58.706843 containerd[1645]: time="2025-12-13T00:25:58.706745505Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 00:25:58.707044 containerd[1645]: time="2025-12-13T00:25:58.706873334Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:58.707084 containerd[1645]: time="2025-12-13T00:25:58.707065144Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:58.707107 containerd[1645]: time="2025-12-13T00:25:58.707088678Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 00:25:58.707219 containerd[1645]: time="2025-12-13T00:25:58.707119376Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 00:25:58.707219 containerd[1645]: time="2025-12-13T00:25:58.707182544Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 00:25:58.707296 containerd[1645]: time="2025-12-13T00:25:58.707224413Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 00:25:58.707341 containerd[1645]: time="2025-12-13T00:25:58.707260631Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 00:25:58.707389 containerd[1645]: time="2025-12-13T00:25:58.707343566Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 00:25:58.707422 containerd[1645]: time="2025-12-13T00:25:58.707375336Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 00:25:58.707422 containerd[1645]: time="2025-12-13T00:25:58.707407917Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 00:25:58.707422 containerd[1645]: time="2025-12-13T00:25:58.707420080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 00:25:58.707489 containerd[1645]: time="2025-12-13T00:25:58.707430589Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 00:25:58.707489 containerd[1645]: time="2025-12-13T00:25:58.707465365Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 13 00:25:58.707763 containerd[1645]: time="2025-12-13T00:25:58.707716015Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 00:25:58.707798 containerd[1645]: time="2025-12-13T00:25:58.707765277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 00:25:58.707825 containerd[1645]: time="2025-12-13T00:25:58.707803709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 00:25:58.707825 containerd[1645]: time="2025-12-13T00:25:58.707820571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 00:25:58.707893 containerd[1645]: time="2025-12-13T00:25:58.707831441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 00:25:58.707893 containerd[1645]: time="2025-12-13T00:25:58.707853523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 00:25:58.707893 containerd[1645]: time="2025-12-13T00:25:58.707872648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 00:25:58.707893 containerd[1645]: time="2025-12-13T00:25:58.707892025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 00:25:58.708022 containerd[1645]: time="2025-12-13T00:25:58.707931699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 00:25:58.708022 containerd[1645]: time="2025-12-13T00:25:58.707955524Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 00:25:58.708096 containerd[1645]: time="2025-12-13T00:25:58.708017710Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 00:25:58.708497 containerd[1645]: time="2025-12-13T00:25:58.708434402Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 00:25:58.708814 containerd[1645]: time="2025-12-13T00:25:58.708759762Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 00:25:58.708884 containerd[1645]: time="2025-12-13T00:25:58.708845403Z" level=info msg="Start snapshots syncer" Dec 13 00:25:58.709221 containerd[1645]: time="2025-12-13T00:25:58.708905796Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 00:25:58.709607 containerd[1645]: time="2025-12-13T00:25:58.709482678Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 00:25:58.709874 containerd[1645]: time="2025-12-13T00:25:58.709625125Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 00:25:58.711814 containerd[1645]: time="2025-12-13T00:25:58.711747485Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 00:25:58.711955 containerd[1645]: time="2025-12-13T00:25:58.711918115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 00:25:58.712004 containerd[1645]: time="2025-12-13T00:25:58.711962699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 00:25:58.712004 containerd[1645]: time="2025-12-13T00:25:58.711998115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 00:25:58.712052 containerd[1645]: time="2025-12-13T00:25:58.712012192Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 00:25:58.712052 containerd[1645]: time="2025-12-13T00:25:58.712032229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 00:25:58.712052 containerd[1645]: time="2025-12-13T00:25:58.712044903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 00:25:58.712144 containerd[1645]: time="2025-12-13T00:25:58.712080059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 00:25:58.712144 containerd[1645]: time="2025-12-13T00:25:58.712093614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 
00:25:58.712144 containerd[1645]: time="2025-12-13T00:25:58.712107530Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 00:25:58.712505 containerd[1645]: time="2025-12-13T00:25:58.712472234Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:58.712555 containerd[1645]: time="2025-12-13T00:25:58.712501459Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:58.712555 containerd[1645]: time="2025-12-13T00:25:58.712514403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:58.712555 containerd[1645]: time="2025-12-13T00:25:58.712526516Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:58.712555 containerd[1645]: time="2025-12-13T00:25:58.712536926Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 00:25:58.712649 containerd[1645]: time="2025-12-13T00:25:58.712549159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 00:25:58.712649 containerd[1645]: time="2025-12-13T00:25:58.712610514Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 00:25:58.712692 containerd[1645]: time="2025-12-13T00:25:58.712650378Z" level=info msg="runtime interface created" Dec 13 00:25:58.712692 containerd[1645]: time="2025-12-13T00:25:58.712658033Z" level=info msg="created NRI interface" Dec 13 00:25:58.712692 containerd[1645]: time="2025-12-13T00:25:58.712667701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 00:25:58.712692 containerd[1645]: time="2025-12-13T00:25:58.712684362Z" level=info msg="Connect containerd service" Dec 13 00:25:58.712797 containerd[1645]: time="2025-12-13T00:25:58.712711042Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 00:25:58.717866 containerd[1645]: time="2025-12-13T00:25:58.717791490Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:25:59.012003 containerd[1645]: time="2025-12-13T00:25:59.011929003Z" level=info msg="Start subscribing containerd event" Dec 13 00:25:59.012185 containerd[1645]: time="2025-12-13T00:25:59.012006889Z" level=info msg="Start recovering state" Dec 13 00:25:59.013661 containerd[1645]: time="2025-12-13T00:25:59.013615816Z" level=info msg="Start event monitor" Dec 13 00:25:59.013786 containerd[1645]: time="2025-12-13T00:25:59.013632147Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 00:25:59.013814 containerd[1645]: time="2025-12-13T00:25:59.013790985Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 00:25:59.013840 containerd[1645]: time="2025-12-13T00:25:59.013680347Z" level=info msg="Start cni network conf syncer for default" Dec 13 00:25:59.013867 containerd[1645]: time="2025-12-13T00:25:59.013857700Z" level=info msg="Start streaming server" Dec 13 00:25:59.013907 containerd[1645]: time="2025-12-13T00:25:59.013886844Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 00:25:59.013933 containerd[1645]: time="2025-12-13T00:25:59.013911120Z" level=info msg="runtime interface starting up..." Dec 13 00:25:59.013933 containerd[1645]: time="2025-12-13T00:25:59.013923653Z" level=info msg="starting plugins..." Dec 13 00:25:59.014084 containerd[1645]: time="2025-12-13T00:25:59.013950524Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 00:25:59.014679 containerd[1645]: time="2025-12-13T00:25:59.014653061Z" level=info msg="containerd successfully booted in 0.368408s" Dec 13 00:25:59.014920 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 00:25:59.908493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:59.912082 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 00:25:59.912449 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:59.915305 systemd[1]: Startup finished in 2.993s (kernel) + 7.342s (initrd) + 7.379s (userspace) = 17.715s. Dec 13 00:26:00.747740 kubelet[1721]: E1213 00:26:00.747639 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:26:00.752699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:26:00.752898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:26:00.753410 systemd[1]: kubelet.service: Consumed 2.121s CPU time, 267.6M memory peak. Dec 13 00:26:07.091321 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 00:26:07.092664 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:52244.service - OpenSSH per-connection server daemon (10.0.0.1:52244). Dec 13 00:26:07.184137 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 52244 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:07.186522 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.194166 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 00:26:07.195305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 00:26:07.199620 systemd-logind[1607]: New session 1 of user core. Dec 13 00:26:07.224415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 00:26:07.227841 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 00:26:07.248680 (systemd)[1740]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.252474 systemd-logind[1607]: New session 2 of user core. Dec 13 00:26:07.443661 systemd[1740]: Queued start job for default target default.target. 
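During the containerd startup above, the CRI plugin logs "failed to load cni during init ... no network config found in /etc/cni/net.d": no CNI add-on has been installed at this point, so pod networking cannot be set up yet. As a hedged illustration only (not necessarily what this node ends up using; on a kubeadm cluster the CNI add-on normally installs its own config), a minimal bridge/host-local conflist written into the confDir from the containerd config dump above could look like this; the network name, bridge device, and subnet are placeholders:

```python
# Illustrative sketch: write one possible minimal CNI conflist into the
# directory containerd watches (/etc/cni/net.d per the config dump above).
# Name, bridge device, and subnet are made-up placeholders; a real cluster's
# CNI add-on (flannel, calico, ...) installs its own configuration.
import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",                      # placeholder
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",                            # placeholder bridge device
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],  # placeholder subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.write_text(json.dumps(conflist, indent=2))
```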
Dec 13 00:26:07.465350 systemd[1740]: Created slice app.slice - User Application Slice. Dec 13 00:26:07.465385 systemd[1740]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 00:26:07.465402 systemd[1740]: Reached target paths.target - Paths. Dec 13 00:26:07.465463 systemd[1740]: Reached target timers.target - Timers. Dec 13 00:26:07.467036 systemd[1740]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 00:26:07.468010 systemd[1740]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 00:26:07.478714 systemd[1740]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 00:26:07.478846 systemd[1740]: Reached target sockets.target - Sockets. Dec 13 00:26:07.481027 systemd[1740]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 00:26:07.481177 systemd[1740]: Reached target basic.target - Basic System. Dec 13 00:26:07.481248 systemd[1740]: Reached target default.target - Main User Target. Dec 13 00:26:07.481295 systemd[1740]: Startup finished in 221ms. Dec 13 00:26:07.481580 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 00:26:07.494230 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 00:26:07.525283 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256). Dec 13 00:26:07.589034 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:07.591013 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.596369 systemd-logind[1607]: New session 3 of user core. Dec 13 00:26:07.608349 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 00:26:07.624055 sshd[1758]: Connection closed by 10.0.0.1 port 52256 Dec 13 00:26:07.624406 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:07.635150 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:52256.service: Deactivated successfully. Dec 13 00:26:07.637069 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 00:26:07.637897 systemd-logind[1607]: Session 3 logged out. Waiting for processes to exit. Dec 13 00:26:07.640804 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:52258.service - OpenSSH per-connection server daemon (10.0.0.1:52258). Dec 13 00:26:07.641594 systemd-logind[1607]: Removed session 3. Dec 13 00:26:07.701936 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 52258 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:07.703702 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.708862 systemd-logind[1607]: New session 4 of user core. Dec 13 00:26:07.722229 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 00:26:07.732342 sshd[1769]: Connection closed by 10.0.0.1 port 52258 Dec 13 00:26:07.732680 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:07.750168 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:52258.service: Deactivated successfully. Dec 13 00:26:07.752093 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 00:26:07.753025 systemd-logind[1607]: Session 4 logged out. Waiting for processes to exit. Dec 13 00:26:07.756099 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:52260.service - OpenSSH per-connection server daemon (10.0.0.1:52260). 
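The sshd lines above identify the client key only by its fingerprint ("RSA SHA256:bCAENV..."). That string is the unpadded base64 of the SHA-256 digest of the raw public-key blob; a small illustrative sketch (the key path in the usage comment is a placeholder):

```python
# Illustrative: how the "SHA256:..." fingerprint in the "Accepted publickey"
# lines is derived from an OpenSSH public key line ("<type> <base64-blob> [comment]").
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder usage against the file updated earlier in the log:
# print(openssh_sha256_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))
```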
Dec 13 00:26:07.756833 systemd-logind[1607]: Removed session 4. Dec 13 00:26:07.814083 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:07.816254 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.821463 systemd-logind[1607]: New session 5 of user core. Dec 13 00:26:07.831294 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 00:26:07.846823 sshd[1780]: Connection closed by 10.0.0.1 port 52260 Dec 13 00:26:07.847236 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:07.861667 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:52260.service: Deactivated successfully. Dec 13 00:26:07.863680 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 00:26:07.864624 systemd-logind[1607]: Session 5 logged out. Waiting for processes to exit. Dec 13 00:26:07.867793 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:52276.service - OpenSSH per-connection server daemon (10.0.0.1:52276). Dec 13 00:26:07.868582 systemd-logind[1607]: Removed session 5. Dec 13 00:26:07.929564 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 52276 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:07.931638 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:07.936473 systemd-logind[1607]: New session 6 of user core. Dec 13 00:26:07.950323 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 00:26:07.973384 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 00:26:07.973723 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:26:08.924360 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 00:26:08.945281 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 00:26:09.775373 dockerd[1813]: time="2025-12-13T00:26:09.775257766Z" level=info msg="Starting up" Dec 13 00:26:09.784831 dockerd[1813]: time="2025-12-13T00:26:09.784801524Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 00:26:09.810577 dockerd[1813]: time="2025-12-13T00:26:09.810505168Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 00:26:10.036782 dockerd[1813]: time="2025-12-13T00:26:10.036662814Z" level=info msg="Loading containers: start." Dec 13 00:26:10.056055 kernel: Initializing XFRM netlink socket Dec 13 00:26:10.404379 systemd-networkd[1320]: docker0: Link UP Dec 13 00:26:10.582300 dockerd[1813]: time="2025-12-13T00:26:10.582168722Z" level=info msg="Loading containers: done." Dec 13 00:26:10.611867 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck515662587-merged.mount: Deactivated successfully. 
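While the Docker daemon is loading containers, systemd-networkd reports the docker0 bridge link coming up. A small illustrative check of that interface's state via the standard sysfs path (not something the log itself runs):

```python
# Illustrative: read back the state of the docker0 bridge that
# systemd-networkd reports as "Link UP" above.
from pathlib import Path

def link_state(name: str = "docker0") -> str:
    oper = Path(f"/sys/class/net/{name}/operstate")
    return oper.read_text().strip() if oper.exists() else "absent"

print(link_state())   # e.g. "up", "down", "unknown", or "absent"
```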
Dec 13 00:26:10.616845 dockerd[1813]: time="2025-12-13T00:26:10.616785381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 00:26:10.617054 dockerd[1813]: time="2025-12-13T00:26:10.616934971Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 00:26:10.617151 dockerd[1813]: time="2025-12-13T00:26:10.617128294Z" level=info msg="Initializing buildkit" Dec 13 00:26:10.669519 dockerd[1813]: time="2025-12-13T00:26:10.669348305Z" level=info msg="Completed buildkit initialization" Dec 13 00:26:10.679327 dockerd[1813]: time="2025-12-13T00:26:10.679204309Z" level=info msg="Daemon has completed initialization" Dec 13 00:26:10.679432 dockerd[1813]: time="2025-12-13T00:26:10.679311369Z" level=info msg="API listen on /run/docker.sock" Dec 13 00:26:10.679637 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 00:26:11.003440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 00:26:11.005278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:11.405728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:11.430516 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:26:11.489588 kubelet[2039]: E1213 00:26:11.489511 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:26:11.496466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:26:11.496675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:26:11.497171 systemd[1]: kubelet.service: Consumed 404ms CPU time, 109.9M memory peak. Dec 13 00:26:11.693151 containerd[1645]: time="2025-12-13T00:26:11.692990990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 13 00:26:12.797811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182836713.mount: Deactivated successfully. 
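At this point the kubelet has failed twice for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so each start attempt exits with status 1 and systemd schedules another restart. The file only appears once the node is bootstrapped (later entries in this log show the kubelet eventually starting with kubeadm-style flags and a config file). A minimal illustrative check for that precondition, with the path taken from the error message; the timeouts are arbitrary:

```python
# Illustrative only: polls for the file whose absence causes the kubelet
# failures above ("open /var/lib/kubelet/config.yaml: no such file or directory").
import os
import time

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
    """Return True once the kubelet config file exists, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.isfile(KUBELET_CONFIG):
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    print("bootstrapped" if wait_for_kubelet_config(timeout_s=10, poll_s=1) else "config still missing")
```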
Dec 13 00:26:13.877340 containerd[1645]: time="2025-12-13T00:26:13.877269061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:13.878203 containerd[1645]: time="2025-12-13T00:26:13.878166565Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28477660" Dec 13 00:26:13.879453 containerd[1645]: time="2025-12-13T00:26:13.879415627Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:13.882570 containerd[1645]: time="2025-12-13T00:26:13.882518015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:13.883635 containerd[1645]: time="2025-12-13T00:26:13.883603871Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.190549342s" Dec 13 00:26:13.883685 containerd[1645]: time="2025-12-13T00:26:13.883645490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 13 00:26:13.884733 containerd[1645]: time="2025-12-13T00:26:13.884662787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 13 00:26:15.298958 containerd[1645]: time="2025-12-13T00:26:15.298875461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.299944 containerd[1645]: time="2025-12-13T00:26:15.299886788Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 13 00:26:15.301543 containerd[1645]: time="2025-12-13T00:26:15.301483993Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.305671 containerd[1645]: time="2025-12-13T00:26:15.305634507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.307268 containerd[1645]: time="2025-12-13T00:26:15.307203699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.42246523s" Dec 13 00:26:15.307324 containerd[1645]: time="2025-12-13T00:26:15.307275744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 13 00:26:15.307850 
containerd[1645]: time="2025-12-13T00:26:15.307818322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 13 00:26:18.472179 containerd[1645]: time="2025-12-13T00:26:18.472044746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:18.553202 containerd[1645]: time="2025-12-13T00:26:18.553104011Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 13 00:26:18.566132 containerd[1645]: time="2025-12-13T00:26:18.565997421Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:18.757440 containerd[1645]: time="2025-12-13T00:26:18.757205785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:18.758684 containerd[1645]: time="2025-12-13T00:26:18.758620768Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 3.450768342s" Dec 13 00:26:18.758684 containerd[1645]: time="2025-12-13T00:26:18.758675311Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 13 00:26:18.759299 containerd[1645]: time="2025-12-13T00:26:18.759262622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 13 00:26:20.448609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698702623.mount: Deactivated successfully. 
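The pull messages above report both the bytes read and the wall-clock time, so the effective pull rate can be read off directly; the kube-apiserver image (28477660 bytes in 2.190549342s) works out to roughly 13 MB/s. Illustrative arithmetic using the values copied from the log lines:

```python
# Effective pull throughput from the "stop pulling" / "Pulled image" pairs above.
pulls = {
    "kube-apiserver:v1.33.7":          (28_477_660, 2.190549342),
    "kube-controller-manager:v1.33.7": (26_008_626, 1.42246523),
    "kube-scheduler:v1.33.7":          (20_149_965, 3.450768342),
}

for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: {bytes_read / seconds / 1e6:.1f} MB/s")
# -> roughly 13.0, 18.3, and 5.8 MB/s respectively
```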
Dec 13 00:26:21.191591 containerd[1645]: time="2025-12-13T00:26:21.191498973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:21.192562 containerd[1645]: time="2025-12-13T00:26:21.192526230Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 13 00:26:21.194000 containerd[1645]: time="2025-12-13T00:26:21.193948257Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:21.196884 containerd[1645]: time="2025-12-13T00:26:21.196830312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:21.197883 containerd[1645]: time="2025-12-13T00:26:21.197820539Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.438521659s" Dec 13 00:26:21.197883 containerd[1645]: time="2025-12-13T00:26:21.197859963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 13 00:26:21.198689 containerd[1645]: time="2025-12-13T00:26:21.198614578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 13 00:26:21.641496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 00:26:21.643256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:21.862255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:21.867395 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:26:22.030993 kubelet[2133]: E1213 00:26:22.028497 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:26:22.032853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:26:22.033082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:26:22.033506 systemd[1]: kubelet.service: Consumed 224ms CPU time, 111.2M memory peak. Dec 13 00:26:22.255749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539967498.mount: Deactivated successfully. 
Dec 13 00:26:23.421853 containerd[1645]: time="2025-12-13T00:26:23.421755581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:23.423212 containerd[1645]: time="2025-12-13T00:26:23.423141650Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20131029" Dec 13 00:26:23.427121 containerd[1645]: time="2025-12-13T00:26:23.427067663Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:23.430864 containerd[1645]: time="2025-12-13T00:26:23.430810662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:23.431989 containerd[1645]: time="2025-12-13T00:26:23.431928388Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.233267694s" Dec 13 00:26:23.432108 containerd[1645]: time="2025-12-13T00:26:23.431990054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 13 00:26:23.432587 containerd[1645]: time="2025-12-13T00:26:23.432486004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 00:26:24.074075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843655601.mount: Deactivated successfully. 
Dec 13 00:26:24.081146 containerd[1645]: time="2025-12-13T00:26:24.080864154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:26:24.082117 containerd[1645]: time="2025-12-13T00:26:24.082069926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:26:24.083897 containerd[1645]: time="2025-12-13T00:26:24.083817503Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:26:24.085725 containerd[1645]: time="2025-12-13T00:26:24.085661501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:26:24.086223 containerd[1645]: time="2025-12-13T00:26:24.086194671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 653.641652ms" Dec 13 00:26:24.086275 containerd[1645]: time="2025-12-13T00:26:24.086224066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 00:26:24.086867 containerd[1645]: time="2025-12-13T00:26:24.086832988Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 13 00:26:27.428289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548217737.mount: Deactivated successfully. Dec 13 00:26:32.141569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 00:26:32.144051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:33.684200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 00:26:33.690458 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:26:33.840909 kubelet[2264]: E1213 00:26:33.840738 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:26:33.841541 containerd[1645]: time="2025-12-13T00:26:33.841033601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:33.841995 containerd[1645]: time="2025-12-13T00:26:33.841900828Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58237840" Dec 13 00:26:33.844384 containerd[1645]: time="2025-12-13T00:26:33.844302839Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:33.847322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:26:33.847594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:26:33.848072 systemd[1]: kubelet.service: Consumed 398ms CPU time, 110M memory peak. Dec 13 00:26:33.848194 containerd[1645]: time="2025-12-13T00:26:33.848169099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:33.849998 containerd[1645]: time="2025-12-13T00:26:33.849938068Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 9.763077318s" Dec 13 00:26:33.850058 containerd[1645]: time="2025-12-13T00:26:33.850002248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 13 00:26:38.130378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:38.130604 systemd[1]: kubelet.service: Consumed 398ms CPU time, 110M memory peak. Dec 13 00:26:38.133686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:38.165357 systemd[1]: Reload requested from client PID 2303 ('systemctl') (unit session-6.scope)... Dec 13 00:26:38.165378 systemd[1]: Reloading... Dec 13 00:26:38.279040 zram_generator::config[2352]: No configuration found. Dec 13 00:26:38.786731 systemd[1]: Reloading finished in 620 ms. Dec 13 00:26:38.866823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 00:26:38.866925 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 00:26:38.867285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:38.867332 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.5M memory peak. Dec 13 00:26:38.868915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 00:26:41.219199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:41.235476 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:26:41.285295 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:26:41.285295 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:26:41.285295 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:26:41.285295 kubelet[2397]: I1213 00:26:41.285149 2397 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:26:41.441966 kubelet[2397]: I1213 00:26:41.441885 2397 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 00:26:41.441966 kubelet[2397]: I1213 00:26:41.441936 2397 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:26:41.442462 kubelet[2397]: I1213 00:26:41.442415 2397 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:26:41.474050 kubelet[2397]: E1213 00:26:41.473909 2397 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:26:41.474050 kubelet[2397]: I1213 00:26:41.474024 2397 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:26:41.485692 kubelet[2397]: I1213 00:26:41.485643 2397 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:26:41.499713 kubelet[2397]: I1213 00:26:41.499243 2397 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 00:26:41.499865 kubelet[2397]: I1213 00:26:41.499702 2397 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:26:41.500329 kubelet[2397]: I1213 00:26:41.499791 2397 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:26:41.500503 kubelet[2397]: I1213 00:26:41.500329 2397 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:26:41.500503 kubelet[2397]: I1213 00:26:41.500348 2397 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 00:26:41.500564 kubelet[2397]: I1213 00:26:41.500557 2397 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:41.502430 kubelet[2397]: I1213 00:26:41.502390 2397 kubelet.go:480] "Attempting to sync node with API server" Dec 13 00:26:41.502430 kubelet[2397]: I1213 00:26:41.502412 2397 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:26:41.502517 kubelet[2397]: I1213 00:26:41.502454 2397 kubelet.go:386] "Adding apiserver pod source" Dec 13 00:26:41.502517 kubelet[2397]: I1213 00:26:41.502479 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:26:41.510815 kubelet[2397]: I1213 00:26:41.510770 2397 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:26:41.511313 kubelet[2397]: E1213 00:26:41.511263 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:26:41.511808 kubelet[2397]: I1213 00:26:41.511769 2397 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 
13 00:26:41.512298 kubelet[2397]: E1213 00:26:41.512252 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:26:41.512669 kubelet[2397]: W1213 00:26:41.512643 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 00:26:41.515835 kubelet[2397]: I1213 00:26:41.515807 2397 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:26:41.515895 kubelet[2397]: I1213 00:26:41.515866 2397 server.go:1289] "Started kubelet" Dec 13 00:26:41.516188 kubelet[2397]: I1213 00:26:41.516096 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:26:41.517857 kubelet[2397]: I1213 00:26:41.517820 2397 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:26:41.517924 kubelet[2397]: I1213 00:26:41.517832 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:26:41.518444 kubelet[2397]: I1213 00:26:41.518045 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:26:41.521290 kubelet[2397]: I1213 00:26:41.521220 2397 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:26:41.521915 kubelet[2397]: E1213 00:26:41.521868 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:26:41.522262 kubelet[2397]: E1213 00:26:41.522227 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" Dec 13 00:26:41.522499 kubelet[2397]: I1213 00:26:41.522465 2397 server.go:317] "Adding debug handlers to kubelet server" Dec 13 00:26:41.522910 kubelet[2397]: I1213 00:26:41.522872 2397 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:26:41.523067 kubelet[2397]: I1213 00:26:41.523043 2397 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:26:41.523656 kubelet[2397]: I1213 00:26:41.523623 2397 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:26:41.523896 kubelet[2397]: I1213 00:26:41.523817 2397 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:26:41.523896 kubelet[2397]: I1213 00:26:41.523875 2397 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:26:41.524416 kubelet[2397]: E1213 00:26:41.524385 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:26:41.524525 kubelet[2397]: E1213 00:26:41.520940 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809ec006e52d2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:26:41.515826476 +0000 UTC m=+0.275414331,LastTimestamp:2025-12-13 00:26:41.515826476 +0000 UTC m=+0.275414331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:26:41.525102 kubelet[2397]: E1213 00:26:41.525063 2397 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:26:41.525239 kubelet[2397]: I1213 00:26:41.525212 2397 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:26:41.528214 kubelet[2397]: I1213 00:26:41.528065 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 13 00:26:41.556066 kubelet[2397]: I1213 00:26:41.555941 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 13 00:26:41.556066 kubelet[2397]: I1213 00:26:41.556052 2397 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 00:26:41.556219 kubelet[2397]: I1213 00:26:41.556113 2397 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 00:26:41.556219 kubelet[2397]: I1213 00:26:41.556133 2397 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 00:26:41.556264 kubelet[2397]: E1213 00:26:41.556207 2397 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:26:41.556764 kubelet[2397]: E1213 00:26:41.556735 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:26:41.562426 kubelet[2397]: I1213 00:26:41.562398 2397 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:26:41.562426 kubelet[2397]: I1213 00:26:41.562419 2397 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:26:41.562537 kubelet[2397]: I1213 00:26:41.562442 2397 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:41.616135 kubelet[2397]: I1213 00:26:41.616053 2397 policy_none.go:49] "None policy: Start" Dec 13 00:26:41.616135 kubelet[2397]: I1213 00:26:41.616115 2397 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 00:26:41.616135 kubelet[2397]: I1213 00:26:41.616140 2397 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:26:41.622148 kubelet[2397]: E1213 00:26:41.622061 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:26:41.630359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 00:26:41.656350 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 00:26:41.656916 kubelet[2397]: E1213 00:26:41.656487 2397 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:26:41.659826 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 00:26:41.670693 kubelet[2397]: E1213 00:26:41.670628 2397 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:26:41.671055 kubelet[2397]: I1213 00:26:41.671026 2397 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:26:41.671154 kubelet[2397]: I1213 00:26:41.671067 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:26:41.671654 kubelet[2397]: I1213 00:26:41.671630 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:26:41.674639 kubelet[2397]: E1213 00:26:41.674577 2397 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 00:26:41.674743 kubelet[2397]: E1213 00:26:41.674688 2397 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 00:26:41.722818 kubelet[2397]: E1213 00:26:41.722771 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms" Dec 13 00:26:41.773618 kubelet[2397]: I1213 00:26:41.773486 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:41.773904 kubelet[2397]: E1213 00:26:41.773875 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Dec 13 00:26:41.925885 kubelet[2397]: I1213 00:26:41.925806 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:41.925885 kubelet[2397]: I1213 00:26:41.925863 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:41.925885 kubelet[2397]: I1213 00:26:41.925893 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:41.975875 kubelet[2397]: I1213 00:26:41.975824 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:41.976396 kubelet[2397]: E1213 
00:26:41.976339 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Dec 13 00:26:42.017908 systemd[1]: Created slice kubepods-burstable-podaa74f357f608b98e008ea2200b405bc6.slice - libcontainer container kubepods-burstable-podaa74f357f608b98e008ea2200b405bc6.slice. Dec 13 00:26:42.026566 kubelet[2397]: I1213 00:26:42.026444 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:42.026566 kubelet[2397]: I1213 00:26:42.026480 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:42.026566 kubelet[2397]: I1213 00:26:42.026508 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:42.026566 kubelet[2397]: I1213 00:26:42.026525 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:42.026780 kubelet[2397]: I1213 00:26:42.026671 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:42.034329 kubelet[2397]: E1213 00:26:42.034279 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:42.034698 kubelet[2397]: E1213 00:26:42.034656 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.035429 containerd[1645]: time="2025-12-13T00:26:42.035380833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa74f357f608b98e008ea2200b405bc6,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:42.123301 kubelet[2397]: E1213 00:26:42.123256 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms" Dec 13 00:26:42.123282 systemd[1]: Created slice 
kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 13 00:26:42.125400 kubelet[2397]: E1213 00:26:42.125357 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:42.127569 kubelet[2397]: I1213 00:26:42.127517 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:42.286692 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 13 00:26:42.288695 kubelet[2397]: E1213 00:26:42.288657 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:42.289184 kubelet[2397]: E1213 00:26:42.289080 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.289648 containerd[1645]: time="2025-12-13T00:26:42.289607078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:42.331743 kubelet[2397]: E1213 00:26:42.331700 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:26:42.365000 kubelet[2397]: E1213 00:26:42.364916 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:26:42.379033 kubelet[2397]: I1213 00:26:42.378939 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:42.379397 kubelet[2397]: E1213 00:26:42.379351 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Dec 13 00:26:42.426037 kubelet[2397]: E1213 00:26:42.425949 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.428172 containerd[1645]: time="2025-12-13T00:26:42.428100992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:42.462003 containerd[1645]: time="2025-12-13T00:26:42.461860518Z" level=info msg="connecting to shim ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb" 
address="unix:///run/containerd/s/08b0adb4ec1474a50e4f72302a8a940b31241230f6d0118d8c2c0e6cc860433d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:42.480553 containerd[1645]: time="2025-12-13T00:26:42.480496480Z" level=info msg="connecting to shim 8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8" address="unix:///run/containerd/s/1d4359f4bbcf6dda881edbc7a021df279cfef46c309f68e4a0fd0c8bfd696839" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:42.486912 containerd[1645]: time="2025-12-13T00:26:42.486850869Z" level=info msg="connecting to shim 064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb" address="unix:///run/containerd/s/a0f0894fdf138368803ada4d4a256d01b981b8a8be7fb6f659d3110548730874" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:42.501243 systemd[1]: Started cri-containerd-ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb.scope - libcontainer container ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb. Dec 13 00:26:42.509097 systemd[1]: Started cri-containerd-8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8.scope - libcontainer container 8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8. Dec 13 00:26:42.517381 systemd[1]: Started cri-containerd-064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb.scope - libcontainer container 064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb. Dec 13 00:26:42.580206 containerd[1645]: time="2025-12-13T00:26:42.580009987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa74f357f608b98e008ea2200b405bc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb\"" Dec 13 00:26:42.584825 kubelet[2397]: E1213 00:26:42.584711 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.589467 containerd[1645]: time="2025-12-13T00:26:42.589355509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8\"" Dec 13 00:26:42.590189 kubelet[2397]: E1213 00:26:42.590170 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.591725 containerd[1645]: time="2025-12-13T00:26:42.591672961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb\"" Dec 13 00:26:42.592292 kubelet[2397]: E1213 00:26:42.592249 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.597075 containerd[1645]: time="2025-12-13T00:26:42.597015139Z" level=info msg="CreateContainer within sandbox \"ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 00:26:42.599650 containerd[1645]: time="2025-12-13T00:26:42.599586055Z" level=info msg="CreateContainer within sandbox 
\"8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 00:26:42.601804 containerd[1645]: time="2025-12-13T00:26:42.601779908Z" level=info msg="CreateContainer within sandbox \"064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 00:26:42.608661 containerd[1645]: time="2025-12-13T00:26:42.608617781Z" level=info msg="Container 824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:42.613457 containerd[1645]: time="2025-12-13T00:26:42.613425389Z" level=info msg="Container f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:42.617878 kubelet[2397]: E1213 00:26:42.617845 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:26:42.622507 containerd[1645]: time="2025-12-13T00:26:42.622463086Z" level=info msg="CreateContainer within sandbox \"ff96a83763aa027f912aab08d62ded2b5d31f64bb63ece9637dd2a97a37f8dbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955\"" Dec 13 00:26:42.623153 containerd[1645]: time="2025-12-13T00:26:42.623097652Z" level=info msg="StartContainer for \"824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955\"" Dec 13 00:26:42.624417 containerd[1645]: time="2025-12-13T00:26:42.624368889Z" level=info msg="connecting to shim 824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955" address="unix:///run/containerd/s/08b0adb4ec1474a50e4f72302a8a940b31241230f6d0118d8c2c0e6cc860433d" protocol=ttrpc version=3 Dec 13 00:26:42.628029 containerd[1645]: time="2025-12-13T00:26:42.627993666Z" level=info msg="Container 3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:42.632611 containerd[1645]: time="2025-12-13T00:26:42.632507916Z" level=info msg="CreateContainer within sandbox \"8c8731b49e7a52bcb310c06d6146d642692549b829cdef5d2b9a5267e0b629e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e\"" Dec 13 00:26:42.633142 containerd[1645]: time="2025-12-13T00:26:42.633112285Z" level=info msg="StartContainer for \"f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e\"" Dec 13 00:26:42.635534 containerd[1645]: time="2025-12-13T00:26:42.634931958Z" level=info msg="connecting to shim f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e" address="unix:///run/containerd/s/1d4359f4bbcf6dda881edbc7a021df279cfef46c309f68e4a0fd0c8bfd696839" protocol=ttrpc version=3 Dec 13 00:26:42.638525 containerd[1645]: time="2025-12-13T00:26:42.638486403Z" level=info msg="CreateContainer within sandbox \"064e05555a82470cbf83c9bc3213b75479c819a205ad49d26c88fb0934e63adb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6\"" Dec 13 00:26:42.638997 containerd[1645]: time="2025-12-13T00:26:42.638954077Z" level=info 
msg="StartContainer for \"3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6\"" Dec 13 00:26:42.640007 containerd[1645]: time="2025-12-13T00:26:42.639981768Z" level=info msg="connecting to shim 3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6" address="unix:///run/containerd/s/a0f0894fdf138368803ada4d4a256d01b981b8a8be7fb6f659d3110548730874" protocol=ttrpc version=3 Dec 13 00:26:42.646283 systemd[1]: Started cri-containerd-824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955.scope - libcontainer container 824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955. Dec 13 00:26:42.656404 systemd[1]: Started cri-containerd-f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e.scope - libcontainer container f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e. Dec 13 00:26:42.670228 systemd[1]: Started cri-containerd-3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6.scope - libcontainer container 3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6. Dec 13 00:26:42.722744 containerd[1645]: time="2025-12-13T00:26:42.722697732Z" level=info msg="StartContainer for \"824b51587e6f8cce842de8362688815c15c9b8bdb59484e7bfada7d5fed33955\" returns successfully" Dec 13 00:26:42.737484 containerd[1645]: time="2025-12-13T00:26:42.737442810Z" level=info msg="StartContainer for \"f4c81522b6bfc4553e5dde6b0d264b7208477b5eb00d7b194f196a8bf88a280e\" returns successfully" Dec 13 00:26:42.746120 containerd[1645]: time="2025-12-13T00:26:42.746075758Z" level=info msg="StartContainer for \"3fd0c4079110fc426a571633cac6a60577e33293c1c32996dfe0e3737446c9f6\" returns successfully" Dec 13 00:26:43.181959 kubelet[2397]: I1213 00:26:43.181922 2397 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:43.200991 update_engine[1612]: I20251213 00:26:43.200052 1612 update_attempter.cc:509] Updating boot flags... 
Dec 13 00:26:43.571072 kubelet[2397]: E1213 00:26:43.570407 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:43.571072 kubelet[2397]: E1213 00:26:43.570529 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:43.577990 kubelet[2397]: E1213 00:26:43.577688 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:43.577990 kubelet[2397]: E1213 00:26:43.577822 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:43.578136 kubelet[2397]: E1213 00:26:43.578118 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:26:43.578265 kubelet[2397]: E1213 00:26:43.578251 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:44.383632 kubelet[2397]: E1213 00:26:44.383567 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 00:26:44.512399 kubelet[2397]: I1213 00:26:44.512086 2397 apiserver.go:52] "Watching apiserver" Dec 13 00:26:44.524662 kubelet[2397]: I1213 00:26:44.524611 2397 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:26:44.567215 kubelet[2397]: I1213 00:26:44.567172 2397 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:26:44.578695 kubelet[2397]: I1213 00:26:44.578661 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:44.579187 kubelet[2397]: I1213 00:26:44.579140 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:44.625130 kubelet[2397]: I1213 00:26:44.625072 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:44.755142 kubelet[2397]: E1213 00:26:44.755098 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:44.755142 kubelet[2397]: I1213 00:26:44.755153 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:44.755338 kubelet[2397]: E1213 00:26:44.755197 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:44.755503 kubelet[2397]: E1213 00:26:44.755397 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:44.755503 kubelet[2397]: E1213 00:26:44.755427 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:44.755649 kubelet[2397]: E1213 00:26:44.755578 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:44.756824 kubelet[2397]: E1213 00:26:44.756798 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:44.756824 kubelet[2397]: I1213 00:26:44.756818 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:44.757920 kubelet[2397]: E1213 00:26:44.757900 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:45.580368 kubelet[2397]: I1213 00:26:45.580324 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:45.600573 kubelet[2397]: E1213 00:26:45.600492 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:46.581725 kubelet[2397]: E1213 00:26:46.581687 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:47.306689 systemd[1]: Reload requested from client PID 2699 ('systemctl') (unit session-6.scope)... Dec 13 00:26:47.306706 systemd[1]: Reloading... Dec 13 00:26:47.400097 zram_generator::config[2745]: No configuration found. Dec 13 00:26:47.726659 systemd[1]: Reloading finished in 419 ms. Dec 13 00:26:47.768892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:47.791063 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:26:47.791498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:47.791573 systemd[1]: kubelet.service: Consumed 938ms CPU time, 129.4M memory peak. Dec 13 00:26:47.794021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:48.029917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:48.041344 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:26:48.089987 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:26:48.089987 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:26:48.089987 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
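The dns.go:153 "Nameserver limits exceeded" warning that recurs throughout these entries means the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet keeps only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of the same check; the /etc/resolv.conf path and the limit of 3 are the conventional values, not anything read from this machine:

```python
# Keep at most three nameservers from a resolv.conf, mirroring the kubelet's
# "Nameserver limits exceeded" warning above. Path and limit are assumptions.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str):
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    with open("/etc/resolv.conf") as f:
        kept, omitted = applied_nameservers(f.read())
    if omitted:
        print(f"Nameserver limits exceeded; omitted {omitted}, applied line: {' '.join(kept)}")
```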
Dec 13 00:26:48.090469 kubelet[2790]: I1213 00:26:48.090063 2790 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:26:48.097943 kubelet[2790]: I1213 00:26:48.097889 2790 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 00:26:48.097943 kubelet[2790]: I1213 00:26:48.097917 2790 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:26:48.098191 kubelet[2790]: I1213 00:26:48.098167 2790 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:26:48.099523 kubelet[2790]: I1213 00:26:48.099499 2790 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 13 00:26:48.102105 kubelet[2790]: I1213 00:26:48.102033 2790 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:26:48.107912 kubelet[2790]: I1213 00:26:48.107860 2790 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:26:48.113278 kubelet[2790]: I1213 00:26:48.113241 2790 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 00:26:48.113544 kubelet[2790]: I1213 00:26:48.113510 2790 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:26:48.113735 kubelet[2790]: I1213 00:26:48.113541 2790 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:26:48.113820 kubelet[2790]: I1213 00:26:48.113745 2790 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:26:48.113820 kubelet[2790]: I1213 00:26:48.113755 2790 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 00:26:48.114520 kubelet[2790]: I1213 00:26:48.114503 2790 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:48.114744 kubelet[2790]: I1213 
00:26:48.114729 2790 kubelet.go:480] "Attempting to sync node with API server" Dec 13 00:26:48.114744 kubelet[2790]: I1213 00:26:48.114744 2790 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:26:48.114810 kubelet[2790]: I1213 00:26:48.114768 2790 kubelet.go:386] "Adding apiserver pod source" Dec 13 00:26:48.114810 kubelet[2790]: I1213 00:26:48.114784 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:26:48.116268 kubelet[2790]: I1213 00:26:48.116240 2790 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:26:48.116660 kubelet[2790]: I1213 00:26:48.116637 2790 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:26:48.121003 kubelet[2790]: I1213 00:26:48.120985 2790 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:26:48.121059 kubelet[2790]: I1213 00:26:48.121034 2790 server.go:1289] "Started kubelet" Dec 13 00:26:48.122786 kubelet[2790]: I1213 00:26:48.122735 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:26:48.122786 kubelet[2790]: I1213 00:26:48.122777 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:26:48.124831 kubelet[2790]: I1213 00:26:48.124815 2790 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:26:48.126006 kubelet[2790]: E1213 00:26:48.125990 2790 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:26:48.126122 kubelet[2790]: I1213 00:26:48.126112 2790 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:26:48.126370 kubelet[2790]: I1213 00:26:48.126338 2790 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:26:48.126482 kubelet[2790]: I1213 00:26:48.126450 2790 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:26:48.126916 kubelet[2790]: I1213 00:26:48.123036 2790 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:26:48.129682 kubelet[2790]: I1213 00:26:48.129232 2790 server.go:317] "Adding debug handlers to kubelet server" Dec 13 00:26:48.129682 kubelet[2790]: I1213 00:26:48.129441 2790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:26:48.133285 kubelet[2790]: I1213 00:26:48.133243 2790 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:26:48.135515 kubelet[2790]: E1213 00:26:48.135477 2790 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:26:48.136048 kubelet[2790]: I1213 00:26:48.136026 2790 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:26:48.136048 kubelet[2790]: I1213 00:26:48.136041 2790 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:26:48.137084 kubelet[2790]: I1213 00:26:48.137025 2790 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 13 00:26:48.147326 kubelet[2790]: I1213 00:26:48.147216 2790 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 13 00:26:48.147326 kubelet[2790]: I1213 00:26:48.147309 2790 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 00:26:48.147326 kubelet[2790]: I1213 00:26:48.147335 2790 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 00:26:48.147557 kubelet[2790]: I1213 00:26:48.147346 2790 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 00:26:48.147557 kubelet[2790]: E1213 00:26:48.147401 2790 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190375 2790 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190400 2790 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190420 2790 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190569 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190584 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190602 2790 policy_none.go:49] "None policy: Start" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190618 2790 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190628 2790 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:26:48.191596 kubelet[2790]: I1213 00:26:48.190752 2790 state_mem.go:75] "Updated machine memory state" Dec 13 00:26:48.197128 kubelet[2790]: E1213 00:26:48.197045 2790 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:26:48.197393 kubelet[2790]: I1213 00:26:48.197377 2790 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:26:48.197433 kubelet[2790]: I1213 00:26:48.197391 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:26:48.197750 kubelet[2790]: I1213 00:26:48.197716 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:26:48.199992 kubelet[2790]: E1213 00:26:48.199115 2790 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:26:48.249096 kubelet[2790]: I1213 00:26:48.248845 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:48.249388 kubelet[2790]: I1213 00:26:48.249309 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:48.249735 kubelet[2790]: I1213 00:26:48.249682 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.258686 kubelet[2790]: E1213 00:26:48.258624 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:48.303279 kubelet[2790]: I1213 00:26:48.303162 2790 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:48.315567 kubelet[2790]: I1213 00:26:48.315527 2790 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 00:26:48.315715 kubelet[2790]: I1213 00:26:48.315624 2790 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:26:48.427226 kubelet[2790]: I1213 00:26:48.427177 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.427226 kubelet[2790]: I1213 00:26:48.427221 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.427429 kubelet[2790]: I1213 00:26:48.427256 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.427429 kubelet[2790]: I1213 00:26:48.427280 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.427429 kubelet[2790]: I1213 00:26:48.427312 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:48.427429 kubelet[2790]: I1213 00:26:48.427331 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:48.427429 kubelet[2790]: I1213 00:26:48.427351 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:48.427552 kubelet[2790]: I1213 00:26:48.427368 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:48.427552 kubelet[2790]: I1213 00:26:48.427391 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa74f357f608b98e008ea2200b405bc6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa74f357f608b98e008ea2200b405bc6\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:48.556138 kubelet[2790]: E1213 00:26:48.555875 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:48.556138 kubelet[2790]: E1213 00:26:48.555884 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:48.559238 kubelet[2790]: E1213 00:26:48.559151 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:49.115881 kubelet[2790]: I1213 00:26:49.115799 2790 apiserver.go:52] "Watching apiserver" Dec 13 00:26:49.126649 kubelet[2790]: I1213 00:26:49.126608 2790 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:26:49.163580 kubelet[2790]: I1213 00:26:49.163428 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:49.163580 kubelet[2790]: I1213 00:26:49.163537 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:49.164032 kubelet[2790]: E1213 00:26:49.163909 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:50.418505 kubelet[2790]: E1213 00:26:50.417930 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:50.418505 kubelet[2790]: E1213 00:26:50.418271 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:50.419142 kubelet[2790]: E1213 00:26:50.419099 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:50.419615 kubelet[2790]: E1213 00:26:50.419574 2790 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:50.783067 kubelet[2790]: I1213 00:26:50.782647 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.782611938 podStartE2EDuration="5.782611938s" podCreationTimestamp="2025-12-13 00:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:50.419488584 +0000 UTC m=+2.364189459" watchObservedRunningTime="2025-12-13 00:26:50.782611938 +0000 UTC m=+2.727312823" Dec 13 00:26:51.167033 kubelet[2790]: E1213 00:26:51.166960 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:51.167181 kubelet[2790]: E1213 00:26:51.167147 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:51.172290 kubelet[2790]: I1213 00:26:51.172197 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.17217522 podStartE2EDuration="3.17217522s" podCreationTimestamp="2025-12-13 00:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:50.783250753 +0000 UTC m=+2.727951628" watchObservedRunningTime="2025-12-13 00:26:51.17217522 +0000 UTC m=+3.116876105" Dec 13 00:26:51.358861 kubelet[2790]: I1213 00:26:51.358669 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.358646527 podStartE2EDuration="3.358646527s" podCreationTimestamp="2025-12-13 00:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:51.172943428 +0000 UTC m=+3.117644313" watchObservedRunningTime="2025-12-13 00:26:51.358646527 +0000 UTC m=+3.303347412" Dec 13 00:26:51.594548 sudo[1791]: pam_unix(sudo:session): session closed for user root Dec 13 00:26:51.597210 sshd[1790]: Connection closed by 10.0.0.1 port 52276 Dec 13 00:26:51.597706 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:51.602615 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:52276.service: Deactivated successfully. Dec 13 00:26:51.605502 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 00:26:51.605818 systemd[1]: session-6.scope: Consumed 5.888s CPU time, 191M memory peak. Dec 13 00:26:51.608361 systemd-logind[1607]: Session 6 logged out. Waiting for processes to exit. Dec 13 00:26:51.610398 systemd-logind[1607]: Removed session 6. Dec 13 00:26:52.432561 kubelet[2790]: I1213 00:26:52.432524 2790 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 00:26:52.433037 containerd[1645]: time="2025-12-13T00:26:52.432861034Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 00:26:52.433344 kubelet[2790]: I1213 00:26:52.433106 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 00:26:53.487738 systemd[1]: Created slice kubepods-besteffort-poddfdf8c63_86e4_4b2f_9807_acd5f2232c9a.slice - libcontainer container kubepods-besteffort-poddfdf8c63_86e4_4b2f_9807_acd5f2232c9a.slice. Dec 13 00:26:53.510090 systemd[1]: Created slice kubepods-burstable-pod72f95f2b_0a87_4bb3_ae38_788daaa0ec56.slice - libcontainer container kubepods-burstable-pod72f95f2b_0a87_4bb3_ae38_788daaa0ec56.slice. Dec 13 00:26:53.559960 kubelet[2790]: I1213 00:26:53.559908 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-cni-plugin\") pod \"kube-flannel-ds-dh7rc\" (UID: \"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.559960 kubelet[2790]: I1213 00:26:53.559985 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-cni\") pod \"kube-flannel-ds-dh7rc\" (UID: \"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.559960 kubelet[2790]: I1213 00:26:53.560014 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-flannel-cfg\") pod \"kube-flannel-ds-dh7rc\" (UID: \"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.559960 kubelet[2790]: I1213 00:26:53.560029 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfdf8c63-86e4-4b2f-9807-acd5f2232c9a-xtables-lock\") pod \"kube-proxy-2xv8g\" (UID: \"dfdf8c63-86e4-4b2f-9807-acd5f2232c9a\") " pod="kube-system/kube-proxy-2xv8g" Dec 13 00:26:53.559960 kubelet[2790]: I1213 00:26:53.560044 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfdf8c63-86e4-4b2f-9807-acd5f2232c9a-lib-modules\") pod \"kube-proxy-2xv8g\" (UID: \"dfdf8c63-86e4-4b2f-9807-acd5f2232c9a\") " pod="kube-system/kube-proxy-2xv8g" Dec 13 00:26:53.561554 kubelet[2790]: I1213 00:26:53.560059 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2tt\" (UniqueName: \"kubernetes.io/projected/dfdf8c63-86e4-4b2f-9807-acd5f2232c9a-kube-api-access-lx2tt\") pod \"kube-proxy-2xv8g\" (UID: \"dfdf8c63-86e4-4b2f-9807-acd5f2232c9a\") " pod="kube-system/kube-proxy-2xv8g" Dec 13 00:26:53.561554 kubelet[2790]: I1213 00:26:53.560079 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-xtables-lock\") pod \"kube-flannel-ds-dh7rc\" (UID: \"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.561554 kubelet[2790]: I1213 00:26:53.560095 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-run\") pod \"kube-flannel-ds-dh7rc\" (UID: 
\"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.561554 kubelet[2790]: I1213 00:26:53.560135 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9sx\" (UniqueName: \"kubernetes.io/projected/72f95f2b-0a87-4bb3-ae38-788daaa0ec56-kube-api-access-sb9sx\") pod \"kube-flannel-ds-dh7rc\" (UID: \"72f95f2b-0a87-4bb3-ae38-788daaa0ec56\") " pod="kube-flannel/kube-flannel-ds-dh7rc" Dec 13 00:26:53.561554 kubelet[2790]: I1213 00:26:53.560223 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfdf8c63-86e4-4b2f-9807-acd5f2232c9a-kube-proxy\") pod \"kube-proxy-2xv8g\" (UID: \"dfdf8c63-86e4-4b2f-9807-acd5f2232c9a\") " pod="kube-system/kube-proxy-2xv8g" Dec 13 00:26:53.807462 kubelet[2790]: E1213 00:26:53.807297 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.808111 containerd[1645]: time="2025-12-13T00:26:53.808049483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xv8g,Uid:dfdf8c63-86e4-4b2f-9807-acd5f2232c9a,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:53.816510 kubelet[2790]: E1213 00:26:53.816472 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.817064 containerd[1645]: time="2025-12-13T00:26:53.817031964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dh7rc,Uid:72f95f2b-0a87-4bb3-ae38-788daaa0ec56,Namespace:kube-flannel,Attempt:0,}" Dec 13 00:26:53.855999 containerd[1645]: time="2025-12-13T00:26:53.854636308Z" level=info msg="connecting to shim debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768" address="unix:///run/containerd/s/6e29dbfaeef684c7c0431c729a8f947aa26c69f9538cbfe05e99aecfc921df5e" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:53.855999 containerd[1645]: time="2025-12-13T00:26:53.855781081Z" level=info msg="connecting to shim 7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d" address="unix:///run/containerd/s/e5b2ca6db0148bc0f968e3e9edc9f0e1a882903fba773fb1712abc26c8ea6b55" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:53.902250 systemd[1]: Started cri-containerd-debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768.scope - libcontainer container debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768. Dec 13 00:26:53.906618 systemd[1]: Started cri-containerd-7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d.scope - libcontainer container 7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d. 
Dec 13 00:26:53.940699 containerd[1645]: time="2025-12-13T00:26:53.940646413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xv8g,Uid:dfdf8c63-86e4-4b2f-9807-acd5f2232c9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d\"" Dec 13 00:26:53.941578 kubelet[2790]: E1213 00:26:53.941409 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.947510 containerd[1645]: time="2025-12-13T00:26:53.947457998Z" level=info msg="CreateContainer within sandbox \"7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 00:26:53.960641 containerd[1645]: time="2025-12-13T00:26:53.960592382Z" level=info msg="Container 9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:53.961431 containerd[1645]: time="2025-12-13T00:26:53.961404703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dh7rc,Uid:72f95f2b-0a87-4bb3-ae38-788daaa0ec56,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\"" Dec 13 00:26:53.962777 kubelet[2790]: E1213 00:26:53.962392 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.963881 containerd[1645]: time="2025-12-13T00:26:53.963854029Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Dec 13 00:26:53.972707 containerd[1645]: time="2025-12-13T00:26:53.972658907Z" level=info msg="CreateContainer within sandbox \"7b09cb555c31366516e2e40ef70e2b3e66103c2e2c69de3a19fac35195f81b2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c\"" Dec 13 00:26:53.973201 containerd[1645]: time="2025-12-13T00:26:53.973169223Z" level=info msg="StartContainer for \"9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c\"" Dec 13 00:26:53.974601 containerd[1645]: time="2025-12-13T00:26:53.974575125Z" level=info msg="connecting to shim 9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c" address="unix:///run/containerd/s/e5b2ca6db0148bc0f968e3e9edc9f0e1a882903fba773fb1712abc26c8ea6b55" protocol=ttrpc version=3 Dec 13 00:26:53.997179 systemd[1]: Started cri-containerd-9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c.scope - libcontainer container 9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c. Dec 13 00:26:54.085887 containerd[1645]: time="2025-12-13T00:26:54.085763682Z" level=info msg="StartContainer for \"9f5b07332bdb9eabfa2fe7ca946e7a87a035ccacf59c1b57c8828c2163fdcc5c\" returns successfully" Dec 13 00:26:54.174718 kubelet[2790]: E1213 00:26:54.174656 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:55.708463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523261329.mount: Deactivated successfully. 
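Both RunPodSandbox calls above eventually log "returns sandbox id". For working with journal text in exactly this containerd message format, a small helper (tailored to this format and therefore fragile) to map pod names to sandbox ids:

```python
# Extract pod-name -> sandbox-id pairs from containerd "RunPodSandbox ... returns sandbox id"
# entries like the ones above. The pattern matches only this exact message shape.
import re

SANDBOX_RETURN = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^}]*\} returns sandbox id \\?"([0-9a-f]{64})\\?"'
)

def sandbox_ids(journal_text: str) -> dict:
    return dict(SANDBOX_RETURN.findall(journal_text))
```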
Dec 13 00:26:55.790040 containerd[1645]: time="2025-12-13T00:26:55.789941020Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:55.790878 containerd[1645]: time="2025-12-13T00:26:55.790818223Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4850109" Dec 13 00:26:55.792102 containerd[1645]: time="2025-12-13T00:26:55.792069627Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:55.794802 containerd[1645]: time="2025-12-13T00:26:55.794760957Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:55.795862 containerd[1645]: time="2025-12-13T00:26:55.795820862Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.831929222s" Dec 13 00:26:55.795900 containerd[1645]: time="2025-12-13T00:26:55.795859585Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Dec 13 00:26:55.800881 containerd[1645]: time="2025-12-13T00:26:55.800834663Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 00:26:55.808191 containerd[1645]: time="2025-12-13T00:26:55.808139033Z" level=info msg="Container e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:55.812384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038634660.mount: Deactivated successfully. Dec 13 00:26:55.815185 containerd[1645]: time="2025-12-13T00:26:55.815151616Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af\"" Dec 13 00:26:55.815739 containerd[1645]: time="2025-12-13T00:26:55.815703059Z" level=info msg="StartContainer for \"e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af\"" Dec 13 00:26:55.816735 containerd[1645]: time="2025-12-13T00:26:55.816705045Z" level=info msg="connecting to shim e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af" address="unix:///run/containerd/s/6e29dbfaeef684c7c0431c729a8f947aa26c69f9538cbfe05e99aecfc921df5e" protocol=ttrpc version=3 Dec 13 00:26:55.845174 systemd[1]: Started cri-containerd-e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af.scope - libcontainer container e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af. Dec 13 00:26:55.878351 systemd[1]: cri-containerd-e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af.scope: Deactivated successfully. 
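The pull above reads 4,850,109 bytes for flannel-cni-plugin:v1.6.2-flannel1 in 1.831929222 s; a back-of-the-envelope throughput from just those two logged values:

```python
# Effective pull throughput for the flannel-cni-plugin image, using the log values above.
bytes_read = 4_850_109   # "bytes read=4850109"
elapsed_s = 1.831929222  # "in 1.831929222s"

print(f"{bytes_read / elapsed_s / 1e6:.2f} MB/s")  # ~2.65 MB/s
```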
Dec 13 00:26:55.897864 containerd[1645]: time="2025-12-13T00:26:55.897794294Z" level=info msg="received container exit event container_id:\"e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af\" id:\"e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af\" pid:3138 exited_at:{seconds:1765585615 nanos:879591032}" Dec 13 00:26:55.899454 containerd[1645]: time="2025-12-13T00:26:55.899414208Z" level=info msg="StartContainer for \"e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af\" returns successfully" Dec 13 00:26:55.922768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b9a5ef63215f88296acd604d416c883d3d5c5ddb739ded52f7da81754855af-rootfs.mount: Deactivated successfully. Dec 13 00:26:56.179699 kubelet[2790]: E1213 00:26:56.179659 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:56.181982 containerd[1645]: time="2025-12-13T00:26:56.181908297Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Dec 13 00:26:56.225530 kubelet[2790]: I1213 00:26:56.224739 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xv8g" podStartSLOduration=3.224715025 podStartE2EDuration="3.224715025s" podCreationTimestamp="2025-12-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:54.185547065 +0000 UTC m=+6.130247960" watchObservedRunningTime="2025-12-13 00:26:56.224715025 +0000 UTC m=+8.169415900" Dec 13 00:26:56.393475 kubelet[2790]: E1213 00:26:56.393426 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:56.803135 kubelet[2790]: E1213 00:26:56.803089 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:57.180513 kubelet[2790]: E1213 00:26:57.180482 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:57.181108 kubelet[2790]: E1213 00:26:57.180780 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:58.665270 kubelet[2790]: E1213 00:26:58.665234 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:59.184362 kubelet[2790]: E1213 00:26:59.184328 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:02.611764 containerd[1645]: time="2025-12-13T00:27:02.611680618Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:27:02.616731 containerd[1645]: time="2025-12-13T00:27:02.616691788Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=26948816" Dec 13 00:27:02.618326 containerd[1645]: 
time="2025-12-13T00:27:02.618295583Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:27:02.622787 containerd[1645]: time="2025-12-13T00:27:02.622715685Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:27:02.624203 containerd[1645]: time="2025-12-13T00:27:02.624141557Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 6.44218471s" Dec 13 00:27:02.624203 containerd[1645]: time="2025-12-13T00:27:02.624182304Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Dec 13 00:27:02.629018 containerd[1645]: time="2025-12-13T00:27:02.628943084Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 00:27:02.637465 containerd[1645]: time="2025-12-13T00:27:02.637397766Z" level=info msg="Container 86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:27:02.641775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755622431.mount: Deactivated successfully. Dec 13 00:27:02.924699 containerd[1645]: time="2025-12-13T00:27:02.924638207Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0\"" Dec 13 00:27:02.925419 containerd[1645]: time="2025-12-13T00:27:02.925388352Z" level=info msg="StartContainer for \"86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0\"" Dec 13 00:27:02.926667 containerd[1645]: time="2025-12-13T00:27:02.926620341Z" level=info msg="connecting to shim 86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0" address="unix:///run/containerd/s/6e29dbfaeef684c7c0431c729a8f947aa26c69f9538cbfe05e99aecfc921df5e" protocol=ttrpc version=3 Dec 13 00:27:02.956213 systemd[1]: Started cri-containerd-86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0.scope - libcontainer container 86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0. Dec 13 00:27:02.987753 systemd[1]: cri-containerd-86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0.scope: Deactivated successfully. 
Dec 13 00:27:03.015006 kubelet[2790]: I1213 00:27:03.014945 2790 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 13 00:27:03.040348 containerd[1645]: time="2025-12-13T00:27:03.040232918Z" level=info msg="received container exit event container_id:\"86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0\" id:\"86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0\" pid:3213 exited_at:{seconds:1765585622 nanos:988141327}" Dec 13 00:27:03.042121 containerd[1645]: time="2025-12-13T00:27:03.042070652Z" level=info msg="StartContainer for \"86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0\" returns successfully" Dec 13 00:27:03.068017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f73b4880966de0cc36f45c28bcb4d9089992b2dc3c2e32856ab7632bc50cc0-rootfs.mount: Deactivated successfully. Dec 13 00:27:03.716051 kubelet[2790]: E1213 00:27:03.715884 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:03.725437 systemd[1]: Created slice kubepods-burstable-podc2f822c6_ec1f_4722_8dd1_1400cdf5b7e2.slice - libcontainer container kubepods-burstable-podc2f822c6_ec1f_4722_8dd1_1400cdf5b7e2.slice. Dec 13 00:27:03.726396 kubelet[2790]: I1213 00:27:03.726364 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2-config-volume\") pod \"coredns-674b8bbfcf-gwdnl\" (UID: \"c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2\") " pod="kube-system/coredns-674b8bbfcf-gwdnl" Dec 13 00:27:03.726475 kubelet[2790]: I1213 00:27:03.726431 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkqw\" (UniqueName: \"kubernetes.io/projected/c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2-kube-api-access-5gkqw\") pod \"coredns-674b8bbfcf-gwdnl\" (UID: \"c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2\") " pod="kube-system/coredns-674b8bbfcf-gwdnl" Dec 13 00:27:03.731483 systemd[1]: Created slice kubepods-burstable-pod6dffcd3b_28f4_46af_a675_cec8eb0aab2e.slice - libcontainer container kubepods-burstable-pod6dffcd3b_28f4_46af_a675_cec8eb0aab2e.slice. 
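The recurring dns.go warning in these entries is kubelet noting that more nameservers were configured than it will pass through to pods; the three it kept (1.1.1.1, 1.0.0.1, 8.8.8.8) are shown in the message. A small sketch that counts nameserver entries in a resolv.conf-style file and flags anything over three; the limit value and file path are assumptions based on the common glibc/kubelet default, not taken from this log:

# Count "nameserver" entries in a resolv.conf-style file and warn when more than
# MAX_NS are listed. MAX_NS=3 and the path are assumed common defaults.
MAX_NS = 3
RESOLV_CONF = "/etc/resolv.conf"

with open(RESOLV_CONF) as f:
    nameservers = [
        line.split()[1]
        for line in f
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]

print("configured:", nameservers)
if len(nameservers) > MAX_NS:
    print(f"warning: {len(nameservers)} nameservers listed, only the first {MAX_NS} "
          f"would be applied: {' '.join(nameservers[:MAX_NS])}")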
Dec 13 00:27:03.827590 kubelet[2790]: I1213 00:27:03.827528 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dffcd3b-28f4-46af-a675-cec8eb0aab2e-config-volume\") pod \"coredns-674b8bbfcf-2nxtm\" (UID: \"6dffcd3b-28f4-46af-a675-cec8eb0aab2e\") " pod="kube-system/coredns-674b8bbfcf-2nxtm" Dec 13 00:27:03.827590 kubelet[2790]: I1213 00:27:03.827592 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhkv\" (UniqueName: \"kubernetes.io/projected/6dffcd3b-28f4-46af-a675-cec8eb0aab2e-kube-api-access-5lhkv\") pod \"coredns-674b8bbfcf-2nxtm\" (UID: \"6dffcd3b-28f4-46af-a675-cec8eb0aab2e\") " pod="kube-system/coredns-674b8bbfcf-2nxtm" Dec 13 00:27:04.030371 kubelet[2790]: E1213 00:27:04.030061 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:04.031002 containerd[1645]: time="2025-12-13T00:27:04.030898314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwdnl,Uid:c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2,Namespace:kube-system,Attempt:0,}" Dec 13 00:27:04.035893 kubelet[2790]: E1213 00:27:04.035499 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:04.036501 containerd[1645]: time="2025-12-13T00:27:04.036435230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nxtm,Uid:6dffcd3b-28f4-46af-a675-cec8eb0aab2e,Namespace:kube-system,Attempt:0,}" Dec 13 00:27:04.200464 kubelet[2790]: E1213 00:27:04.199464 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:04.283440 systemd[1]: run-netns-cni\x2de4ac79cc\x2da001\x2dbc16\x2d342b\x2de487056245e0.mount: Deactivated successfully. 
Dec 13 00:27:04.373564 containerd[1645]: time="2025-12-13T00:27:04.373501428Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 00:27:04.423435 containerd[1645]: time="2025-12-13T00:27:04.423327201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwdnl,Uid:c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"264244a6e64eb6cce03c7cc9f1f65811f907dcb481fb386005a63b4f3f656c8a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 00:27:04.423720 kubelet[2790]: E1213 00:27:04.423673 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264244a6e64eb6cce03c7cc9f1f65811f907dcb481fb386005a63b4f3f656c8a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 00:27:04.423797 kubelet[2790]: E1213 00:27:04.423760 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264244a6e64eb6cce03c7cc9f1f65811f907dcb481fb386005a63b4f3f656c8a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-gwdnl" Dec 13 00:27:04.423831 kubelet[2790]: E1213 00:27:04.423796 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264244a6e64eb6cce03c7cc9f1f65811f907dcb481fb386005a63b4f3f656c8a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-gwdnl" Dec 13 00:27:04.423895 kubelet[2790]: E1213 00:27:04.423858 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gwdnl_kube-system(c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gwdnl_kube-system(c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"264244a6e64eb6cce03c7cc9f1f65811f907dcb481fb386005a63b4f3f656c8a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-gwdnl" podUID="c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2" Dec 13 00:27:04.556109 containerd[1645]: time="2025-12-13T00:27:04.555907991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nxtm,Uid:6dffcd3b-28f4-46af-a675-cec8eb0aab2e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12c05d66980539a50f2081cc5894d7a1ab308caaebca443c31fd01fe83a4534\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 00:27:04.556343 kubelet[2790]: E1213 00:27:04.556289 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f12c05d66980539a50f2081cc5894d7a1ab308caaebca443c31fd01fe83a4534\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 00:27:04.556416 kubelet[2790]: E1213 00:27:04.556374 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12c05d66980539a50f2081cc5894d7a1ab308caaebca443c31fd01fe83a4534\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-2nxtm" Dec 13 00:27:04.556453 kubelet[2790]: E1213 00:27:04.556414 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12c05d66980539a50f2081cc5894d7a1ab308caaebca443c31fd01fe83a4534\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-2nxtm" Dec 13 00:27:04.556545 kubelet[2790]: E1213 00:27:04.556486 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2nxtm_kube-system(6dffcd3b-28f4-46af-a675-cec8eb0aab2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2nxtm_kube-system(6dffcd3b-28f4-46af-a675-cec8eb0aab2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f12c05d66980539a50f2081cc5894d7a1ab308caaebca443c31fd01fe83a4534\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-2nxtm" podUID="6dffcd3b-28f4-46af-a675-cec8eb0aab2e" Dec 13 00:27:04.794606 containerd[1645]: time="2025-12-13T00:27:04.794549164Z" level=info msg="Container e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:27:04.851428 systemd[1]: run-netns-cni\x2dd7fd3c04\x2db982\x2dbb2e\x2da246\x2da9c345508ea9.mount: Deactivated successfully. Dec 13 00:27:05.018887 containerd[1645]: time="2025-12-13T00:27:05.018819669Z" level=info msg="CreateContainer within sandbox \"debdd200d07e43b247611cb866a7ccb9323e067d162af1e985131e8881cce768\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4\"" Dec 13 00:27:05.019549 containerd[1645]: time="2025-12-13T00:27:05.019506155Z" level=info msg="StartContainer for \"e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4\"" Dec 13 00:27:05.020525 containerd[1645]: time="2025-12-13T00:27:05.020494799Z" level=info msg="connecting to shim e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4" address="unix:///run/containerd/s/6e29dbfaeef684c7c0431c729a8f947aa26c69f9538cbfe05e99aecfc921df5e" protocol=ttrpc version=3 Dec 13 00:27:05.050278 systemd[1]: Started cri-containerd-e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4.scope - libcontainer container e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4. 
Dec 13 00:27:05.285886 containerd[1645]: time="2025-12-13T00:27:05.285610909Z" level=info msg="StartContainer for \"e5c6c1d5340a554d770f2278a8ec8b0d925f38dc5e0dcf458705951fac5c16c4\" returns successfully" Dec 13 00:27:06.290261 kubelet[2790]: E1213 00:27:06.290210 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:06.725368 systemd-networkd[1320]: flannel.1: Link UP Dec 13 00:27:06.725379 systemd-networkd[1320]: flannel.1: Gained carrier Dec 13 00:27:07.292719 kubelet[2790]: E1213 00:27:07.292681 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:07.959148 systemd-networkd[1320]: flannel.1: Gained IPv6LL Dec 13 00:27:17.148713 kubelet[2790]: E1213 00:27:17.148644 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:17.149242 containerd[1645]: time="2025-12-13T00:27:17.149175373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nxtm,Uid:6dffcd3b-28f4-46af-a675-cec8eb0aab2e,Namespace:kube-system,Attempt:0,}" Dec 13 00:27:17.178098 systemd-networkd[1320]: cni0: Link UP Dec 13 00:27:17.178131 systemd-networkd[1320]: cni0: Gained carrier Dec 13 00:27:17.183473 systemd-networkd[1320]: cni0: Lost carrier Dec 13 00:27:17.186388 systemd-networkd[1320]: veth47cffb50: Link UP Dec 13 00:27:17.189450 kernel: cni0: port 1(veth47cffb50) entered blocking state Dec 13 00:27:17.189507 kernel: cni0: port 1(veth47cffb50) entered disabled state Dec 13 00:27:17.189525 kernel: veth47cffb50: entered allmulticast mode Dec 13 00:27:17.191815 kernel: veth47cffb50: entered promiscuous mode Dec 13 00:27:17.198733 kernel: cni0: port 1(veth47cffb50) entered blocking state Dec 13 00:27:17.198786 kernel: cni0: port 1(veth47cffb50) entered forwarding state Dec 13 00:27:17.199144 systemd-networkd[1320]: veth47cffb50: Gained carrier Dec 13 00:27:17.202130 systemd-networkd[1320]: cni0: Gained carrier Dec 13 00:27:17.205246 containerd[1645]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Dec 13 00:27:17.205246 containerd[1645]: delegateAdd: netconf sent to delegate plugin: Dec 13 00:27:17.231989 containerd[1645]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T00:27:17.231831905Z" level=info msg="connecting to shim 283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321" address="unix:///run/containerd/s/65e826137fe9700018bb92f79566c5fabccf6f30815675da80c704e4b8c2ccce" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:27:17.259147 systemd[1]: Started 
cri-containerd-283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321.scope - libcontainer container 283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321. Dec 13 00:27:17.273994 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:27:17.309516 containerd[1645]: time="2025-12-13T00:27:17.309474415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2nxtm,Uid:6dffcd3b-28f4-46af-a675-cec8eb0aab2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321\"" Dec 13 00:27:17.310309 kubelet[2790]: E1213 00:27:17.310267 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:17.315535 containerd[1645]: time="2025-12-13T00:27:17.315499501Z" level=info msg="CreateContainer within sandbox \"283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:27:17.325230 containerd[1645]: time="2025-12-13T00:27:17.325178967Z" level=info msg="Container 4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:27:17.329040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213311434.mount: Deactivated successfully. Dec 13 00:27:17.332503 containerd[1645]: time="2025-12-13T00:27:17.332457393Z" level=info msg="CreateContainer within sandbox \"283cdd13014c78bcfa4acc67bcd9741d8bdd2d21388ce0e60ec4e8014862b321\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b\"" Dec 13 00:27:17.333056 containerd[1645]: time="2025-12-13T00:27:17.333021670Z" level=info msg="StartContainer for \"4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b\"" Dec 13 00:27:17.333848 containerd[1645]: time="2025-12-13T00:27:17.333823383Z" level=info msg="connecting to shim 4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b" address="unix:///run/containerd/s/65e826137fe9700018bb92f79566c5fabccf6f30815675da80c704e4b8c2ccce" protocol=ttrpc version=3 Dec 13 00:27:17.358204 systemd[1]: Started cri-containerd-4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b.scope - libcontainer container 4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b. 
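The netconf containerd hands to the delegate bridge plugin is printed just above, once as a Go map and once as the JSON actually sent. Rebuilding the same document makes its shape easier to read; the values (cbr0, the 192.168.0.0/24 pod subnet, the 192.168.0.0/17 route, MTU 1450) are copied from the logged JSON, so this is only a reconstruction of what was logged, not how flannel generates it:

# Reconstruction of the delegate netconf printed in the log above; all values
# are copied from the logged JSON, not derived independently.
import json

netconf = {
    "cniVersion": "0.3.1",
    "name": "cbr0",
    "type": "bridge",
    "hairpinMode": True,
    "ipMasq": False,
    "isGateway": True,
    "isDefaultGateway": True,
    "mtu": 1450,
    "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}],
    },
}

print(json.dumps(netconf, sort_keys=True))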
Dec 13 00:27:17.394612 containerd[1645]: time="2025-12-13T00:27:17.394569317Z" level=info msg="StartContainer for \"4628e450306c176c21d2776522f36fd07ca9baed60a02b835b04d581dfae8b3b\" returns successfully" Dec 13 00:27:18.317555 kubelet[2790]: E1213 00:27:18.317437 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:18.327270 kubelet[2790]: I1213 00:27:18.327204 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dh7rc" podStartSLOduration=16.665705643 podStartE2EDuration="25.32718869s" podCreationTimestamp="2025-12-13 00:26:53 +0000 UTC" firstStartedPulling="2025-12-13 00:26:53.963535974 +0000 UTC m=+5.908236859" lastFinishedPulling="2025-12-13 00:27:02.625019011 +0000 UTC m=+14.569719906" observedRunningTime="2025-12-13 00:27:06.723270277 +0000 UTC m=+18.667971183" watchObservedRunningTime="2025-12-13 00:27:18.32718869 +0000 UTC m=+30.271889575" Dec 13 00:27:18.327490 kubelet[2790]: I1213 00:27:18.327348 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2nxtm" podStartSLOduration=25.327340795 podStartE2EDuration="25.327340795s" podCreationTimestamp="2025-12-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:27:18.326915337 +0000 UTC m=+30.271616223" watchObservedRunningTime="2025-12-13 00:27:18.327340795 +0000 UTC m=+30.272041680" Dec 13 00:27:18.839195 systemd-networkd[1320]: veth47cffb50: Gained IPv6LL Dec 13 00:27:19.095187 systemd-networkd[1320]: cni0: Gained IPv6LL Dec 13 00:27:19.148332 kubelet[2790]: E1213 00:27:19.148273 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:19.148817 containerd[1645]: time="2025-12-13T00:27:19.148751612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwdnl,Uid:c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2,Namespace:kube-system,Attempt:0,}" Dec 13 00:27:19.168591 systemd-networkd[1320]: vethe6f90bff: Link UP Dec 13 00:27:19.172056 kernel: cni0: port 2(vethe6f90bff) entered blocking state Dec 13 00:27:19.172137 kernel: cni0: port 2(vethe6f90bff) entered disabled state Dec 13 00:27:19.172189 kernel: vethe6f90bff: entered allmulticast mode Dec 13 00:27:19.174734 kernel: vethe6f90bff: entered promiscuous mode Dec 13 00:27:19.182332 kernel: cni0: port 2(vethe6f90bff) entered blocking state Dec 13 00:27:19.182417 kernel: cni0: port 2(vethe6f90bff) entered forwarding state Dec 13 00:27:19.182482 systemd-networkd[1320]: vethe6f90bff: Gained carrier Dec 13 00:27:19.185759 containerd[1645]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001047f0), "name":"cbr0", "type":"bridge"} Dec 13 00:27:19.185759 containerd[1645]: delegateAdd: netconf sent to delegate plugin: Dec 13 
00:27:19.215027 containerd[1645]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T00:27:19.214926189Z" level=info msg="connecting to shim d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476" address="unix:///run/containerd/s/d35c79ea9cdf9f0c577fd6f10f5be2d88e265faf9597767a235443e92d11dfb6" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:27:19.247211 systemd[1]: Started cri-containerd-d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476.scope - libcontainer container d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476. Dec 13 00:27:19.262627 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:27:19.299447 containerd[1645]: time="2025-12-13T00:27:19.299386958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwdnl,Uid:c2f822c6-ec1f-4722-8dd1-1400cdf5b7e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476\"" Dec 13 00:27:19.300301 kubelet[2790]: E1213 00:27:19.300260 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:19.304903 containerd[1645]: time="2025-12-13T00:27:19.304855941Z" level=info msg="CreateContainer within sandbox \"d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:27:19.317081 containerd[1645]: time="2025-12-13T00:27:19.317006150Z" level=info msg="Container ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:27:19.321507 kubelet[2790]: E1213 00:27:19.321457 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:19.324551 containerd[1645]: time="2025-12-13T00:27:19.324493788Z" level=info msg="CreateContainer within sandbox \"d7a8149505a4f03549f65a8fe0a9a30999e10ed4517cccae26181a74f92b8476\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43\"" Dec 13 00:27:19.325114 containerd[1645]: time="2025-12-13T00:27:19.325050402Z" level=info msg="StartContainer for \"ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43\"" Dec 13 00:27:19.326167 containerd[1645]: time="2025-12-13T00:27:19.326115999Z" level=info msg="connecting to shim ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43" address="unix:///run/containerd/s/d35c79ea9cdf9f0c577fd6f10f5be2d88e265faf9597767a235443e92d11dfb6" protocol=ttrpc version=3 Dec 13 00:27:19.356196 systemd[1]: Started cri-containerd-ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43.scope - libcontainer container ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43. 
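The kernel messages above show veth47cffb50 and vethe6f90bff being added as ports 1 and 2 on the cni0 bridge and moving through blocking into forwarding state. One way to see the resulting port list from userspace is the bridge's brif directory in sysfs; that directory is a standard kernel interface, though the interface names here come only from this log:

# List the ports currently attached to the cni0 bridge via sysfs.
# /sys/class/net/<bridge>/brif holds one entry per bridge port (e.g. veth47cffb50).
from pathlib import Path

BRIDGE = "cni0"
brif = Path(f"/sys/class/net/{BRIDGE}/brif")

if brif.is_dir():
    for port in sorted(p.name for p in brif.iterdir()):
        operstate = Path(f"/sys/class/net/{port}/operstate")
        state = operstate.read_text().strip() if operstate.exists() else "unknown"
        print(f"{BRIDGE} port {port}: {state}")
else:
    print(f"{BRIDGE} is not a bridge or does not exist")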
Dec 13 00:27:19.393681 containerd[1645]: time="2025-12-13T00:27:19.393621481Z" level=info msg="StartContainer for \"ed8c75fd11d6f557d09356bcf760c7147896b60b0d2ff842562be5a1a2456d43\" returns successfully" Dec 13 00:27:20.324716 kubelet[2790]: E1213 00:27:20.324679 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:20.538509 kubelet[2790]: I1213 00:27:20.538435 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gwdnl" podStartSLOduration=27.538393597 podStartE2EDuration="27.538393597s" podCreationTimestamp="2025-12-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:27:20.441307815 +0000 UTC m=+32.386008700" watchObservedRunningTime="2025-12-13 00:27:20.538393597 +0000 UTC m=+32.483094492" Dec 13 00:27:20.759140 systemd-networkd[1320]: vethe6f90bff: Gained IPv6LL Dec 13 00:27:21.326949 kubelet[2790]: E1213 00:27:21.326881 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:22.328409 kubelet[2790]: E1213 00:27:22.328355 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:30.309026 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:51690.service - OpenSSH per-connection server daemon (10.0.0.1:51690). Dec 13 00:27:30.414806 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 51690 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:30.417798 sshd-session[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:30.423337 systemd-logind[1607]: New session 7 of user core. Dec 13 00:27:30.432222 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 00:27:30.588634 sshd[3739]: Connection closed by 10.0.0.1 port 51690 Dec 13 00:27:30.588869 sshd-session[3735]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:30.593901 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:51690.service: Deactivated successfully. Dec 13 00:27:30.596048 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 00:27:30.597168 systemd-logind[1607]: Session 7 logged out. Waiting for processes to exit. Dec 13 00:27:30.598502 systemd-logind[1607]: Removed session 7. Dec 13 00:27:35.604234 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:51706.service - OpenSSH per-connection server daemon (10.0.0.1:51706). Dec 13 00:27:35.668842 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 51706 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:35.671749 sshd-session[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:35.678372 systemd-logind[1607]: New session 8 of user core. Dec 13 00:27:35.688316 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 00:27:35.883347 sshd[3782]: Connection closed by 10.0.0.1 port 51706 Dec 13 00:27:35.883559 sshd-session[3778]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:35.887741 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:51706.service: Deactivated successfully. 
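The podStartSLOduration figures in the latency-tracker entries above are just the gap between the pod's creation timestamp and the time it was observed running. Recomputing the coredns-674b8bbfcf-gwdnl value from the two timestamps in the log reproduces the logged 27.538393597 s; the timestamps are copied from the log and the calculation is only a check:

# Recompute the podStartSLOduration logged for coredns-674b8bbfcf-gwdnl from the
# two timestamps in the entry above (values copied from the log, truncated to
# microseconds because datetime does not carry nanoseconds).
from datetime import datetime, timezone

created  = datetime(2025, 12, 13, 0, 26, 53, tzinfo=timezone.utc)            # podCreationTimestamp
observed = datetime(2025, 12, 13, 0, 27, 20, 538394, tzinfo=timezone.utc)    # watchObservedRunningTime 00:27:20.538393597

print((observed - created).total_seconds())  # ~27.538394 s, matching podStartSLOduration=27.538393597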
Dec 13 00:27:35.889873 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 00:27:35.891759 systemd-logind[1607]: Session 8 logged out. Waiting for processes to exit. Dec 13 00:27:35.893150 systemd-logind[1607]: Removed session 8. Dec 13 00:27:40.900602 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:52354.service - OpenSSH per-connection server daemon (10.0.0.1:52354). Dec 13 00:27:40.960146 sshd[3817]: Accepted publickey for core from 10.0.0.1 port 52354 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:40.963074 sshd-session[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:40.968363 systemd-logind[1607]: New session 9 of user core. Dec 13 00:27:40.979203 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 00:27:41.069739 sshd[3821]: Connection closed by 10.0.0.1 port 52354 Dec 13 00:27:41.070094 sshd-session[3817]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:41.075478 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:52354.service: Deactivated successfully. Dec 13 00:27:41.077920 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 00:27:41.078949 systemd-logind[1607]: Session 9 logged out. Waiting for processes to exit. Dec 13 00:27:41.080332 systemd-logind[1607]: Removed session 9. Dec 13 00:27:46.088708 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:52356.service - OpenSSH per-connection server daemon (10.0.0.1:52356). Dec 13 00:27:46.144915 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:46.147300 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:46.152656 systemd-logind[1607]: New session 10 of user core. Dec 13 00:27:46.163245 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 00:27:46.387264 sshd[3859]: Connection closed by 10.0.0.1 port 52356 Dec 13 00:27:46.387613 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:46.398641 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:52356.service: Deactivated successfully. Dec 13 00:27:46.400703 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 00:27:46.401612 systemd-logind[1607]: Session 10 logged out. Waiting for processes to exit. Dec 13 00:27:46.404534 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:52372.service - OpenSSH per-connection server daemon (10.0.0.1:52372). Dec 13 00:27:46.405594 systemd-logind[1607]: Removed session 10. Dec 13 00:27:46.457345 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 52372 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:46.459307 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:46.464053 systemd-logind[1607]: New session 11 of user core. Dec 13 00:27:46.474142 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 00:27:46.937335 sshd[3878]: Connection closed by 10.0.0.1 port 52372 Dec 13 00:27:46.937641 sshd-session[3874]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:46.947825 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:52372.service: Deactivated successfully. Dec 13 00:27:46.949746 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 00:27:46.950551 systemd-logind[1607]: Session 11 logged out. Waiting for processes to exit. 
Dec 13 00:27:46.956666 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:52374.service - OpenSSH per-connection server daemon (10.0.0.1:52374). Dec 13 00:27:46.958360 systemd-logind[1607]: Removed session 11. Dec 13 00:27:47.019837 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 52374 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:47.022055 sshd-session[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:47.027591 systemd-logind[1607]: New session 12 of user core. Dec 13 00:27:47.037231 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 00:27:47.276375 sshd[3915]: Connection closed by 10.0.0.1 port 52374 Dec 13 00:27:47.276567 sshd-session[3901]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:47.282735 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:52374.service: Deactivated successfully. Dec 13 00:27:47.285143 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 00:27:47.285924 systemd-logind[1607]: Session 12 logged out. Waiting for processes to exit. Dec 13 00:27:47.287053 systemd-logind[1607]: Removed session 12. Dec 13 00:27:52.294410 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:41770.service - OpenSSH per-connection server daemon (10.0.0.1:41770). Dec 13 00:27:52.366454 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 41770 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:52.368953 sshd-session[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:52.377263 systemd-logind[1607]: New session 13 of user core. Dec 13 00:27:52.392306 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 00:27:52.472774 sshd[3954]: Connection closed by 10.0.0.1 port 41770 Dec 13 00:27:52.473054 sshd-session[3950]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:52.476632 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:41770.service: Deactivated successfully. Dec 13 00:27:52.478921 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 00:27:52.480548 systemd-logind[1607]: Session 13 logged out. Waiting for processes to exit. Dec 13 00:27:52.481857 systemd-logind[1607]: Removed session 13. Dec 13 00:27:57.487773 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:41778.service - OpenSSH per-connection server daemon (10.0.0.1:41778). Dec 13 00:27:57.552506 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 41778 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:57.554709 sshd-session[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:57.559466 systemd-logind[1607]: New session 14 of user core. Dec 13 00:27:57.570159 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 00:27:57.645853 sshd[3993]: Connection closed by 10.0.0.1 port 41778 Dec 13 00:27:57.646170 sshd-session[3989]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:57.651552 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:41778.service: Deactivated successfully. Dec 13 00:27:57.653717 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 00:27:57.654737 systemd-logind[1607]: Session 14 logged out. Waiting for processes to exit. Dec 13 00:27:57.656021 systemd-logind[1607]: Removed session 14. 
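The remaining entries are routine per-connection SSH services: each sshd@N-...service pairs an "Accepted publickey ... port P" line with a later "Connection closed ... port P" for the same client port. A sketch that pairs those two messages by port to report per-session durations from a journal dump; it assumes one journal entry per input line (unlike the wrapped transcript above) and regexes written only for the message shapes seen here:

# Pair sshd "Accepted publickey ... port N" and "Connection closed ... port N"
# journal lines by client port and report how long each session lasted.
import re
import sys
from datetime import datetime

TS = r"(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
OPENED = re.compile(TS + r".*sshd.*Accepted publickey for (\S+) from (\S+) port (\d+)")
CLOSED = re.compile(TS + r".*sshd.*Connection closed by \S+ port (\d+)")

def parse_ts(text: str) -> datetime:
    # The journal lines omit the year; the default (1900) is fine for differences.
    return datetime.strptime(text, "%b %d %H:%M:%S.%f")

open_sessions = {}
for line in sys.stdin:
    if m := OPENED.search(line):
        ts, user, addr, port = m.groups()
        open_sessions[port] = (parse_ts(ts), user, addr)
    elif m := CLOSED.search(line):
        ts, port = m.groups()
        if port in open_sessions:
            started, user, addr = open_sessions.pop(port)
            duration = (parse_ts(ts) - started).total_seconds()
            print(f"{user}@{addr}:{port} session lasted {duration:.1f}s")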
Dec 13 00:28:00.150548 kubelet[2790]: E1213 00:28:00.150485 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:02.665357 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904). Dec 13 00:28:02.738374 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:02.741214 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:02.749372 systemd-logind[1607]: New session 15 of user core. Dec 13 00:28:02.760758 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 00:28:02.852212 sshd[4030]: Connection closed by 10.0.0.1 port 50904 Dec 13 00:28:02.852536 sshd-session[4026]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:02.857081 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:50904.service: Deactivated successfully. Dec 13 00:28:02.859219 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 00:28:02.860161 systemd-logind[1607]: Session 15 logged out. Waiting for processes to exit. Dec 13 00:28:02.861388 systemd-logind[1607]: Removed session 15. Dec 13 00:28:07.877721 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908). Dec 13 00:28:07.936427 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:07.938535 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:07.943768 systemd-logind[1607]: New session 16 of user core. Dec 13 00:28:07.953153 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 00:28:08.037866 sshd[4067]: Connection closed by 10.0.0.1 port 50908 Dec 13 00:28:08.038636 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:08.055095 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:50908.service: Deactivated successfully. Dec 13 00:28:08.057440 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 00:28:08.058597 systemd-logind[1607]: Session 16 logged out. Waiting for processes to exit. Dec 13 00:28:08.063373 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). Dec 13 00:28:08.066170 systemd-logind[1607]: Removed session 16. Dec 13 00:28:08.124031 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:08.126905 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:08.132309 systemd-logind[1607]: New session 17 of user core. Dec 13 00:28:08.142157 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 00:28:08.377301 sshd[4084]: Connection closed by 10.0.0.1 port 50922 Dec 13 00:28:08.377500 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:08.389663 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:50922.service: Deactivated successfully. Dec 13 00:28:08.391800 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 00:28:08.392658 systemd-logind[1607]: Session 17 logged out. Waiting for processes to exit. 
Dec 13 00:28:08.395162 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:50932.service - OpenSSH per-connection server daemon (10.0.0.1:50932). Dec 13 00:28:08.395832 systemd-logind[1607]: Removed session 17. Dec 13 00:28:08.454405 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 50932 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:08.456753 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:08.461397 systemd-logind[1607]: New session 18 of user core. Dec 13 00:28:08.473149 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 00:28:09.115603 sshd[4100]: Connection closed by 10.0.0.1 port 50932 Dec 13 00:28:09.116003 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:09.130885 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:50932.service: Deactivated successfully. Dec 13 00:28:09.135536 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 00:28:09.137830 systemd-logind[1607]: Session 18 logged out. Waiting for processes to exit. Dec 13 00:28:09.141002 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:50940.service - OpenSSH per-connection server daemon (10.0.0.1:50940). Dec 13 00:28:09.142431 systemd-logind[1607]: Removed session 18. Dec 13 00:28:09.148193 kubelet[2790]: E1213 00:28:09.148143 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:09.204803 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 50940 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:09.207611 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:09.213126 systemd-logind[1607]: New session 19 of user core. Dec 13 00:28:09.220272 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 00:28:09.421123 sshd[4123]: Connection closed by 10.0.0.1 port 50940 Dec 13 00:28:09.421414 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:09.436960 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:50940.service: Deactivated successfully. Dec 13 00:28:09.439885 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 00:28:09.440847 systemd-logind[1607]: Session 19 logged out. Waiting for processes to exit. Dec 13 00:28:09.445828 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:50952.service - OpenSSH per-connection server daemon (10.0.0.1:50952). Dec 13 00:28:09.446667 systemd-logind[1607]: Removed session 19. Dec 13 00:28:09.508147 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 50952 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:09.510694 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:09.516446 systemd-logind[1607]: New session 20 of user core. Dec 13 00:28:09.527343 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 00:28:09.608007 sshd[4138]: Connection closed by 10.0.0.1 port 50952 Dec 13 00:28:09.608314 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:09.613629 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:50952.service: Deactivated successfully. Dec 13 00:28:09.616132 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 00:28:09.617135 systemd-logind[1607]: Session 20 logged out. Waiting for processes to exit. 
Dec 13 00:28:09.618823 systemd-logind[1607]: Removed session 20. Dec 13 00:28:14.623847 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:60132.service - OpenSSH per-connection server daemon (10.0.0.1:60132). Dec 13 00:28:14.683837 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 60132 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:14.686103 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:14.690836 systemd-logind[1607]: New session 21 of user core. Dec 13 00:28:14.701321 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 00:28:14.821987 sshd[4175]: Connection closed by 10.0.0.1 port 60132 Dec 13 00:28:14.822344 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:14.828148 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:60132.service: Deactivated successfully. Dec 13 00:28:14.830239 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 00:28:14.831193 systemd-logind[1607]: Session 21 logged out. Waiting for processes to exit. Dec 13 00:28:14.832495 systemd-logind[1607]: Removed session 21. Dec 13 00:28:19.846084 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:60136.service - OpenSSH per-connection server daemon (10.0.0.1:60136). Dec 13 00:28:19.928495 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 60136 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:19.932253 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:19.944472 systemd-logind[1607]: New session 22 of user core. Dec 13 00:28:19.957374 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 00:28:20.043141 sshd[4214]: Connection closed by 10.0.0.1 port 60136 Dec 13 00:28:20.043454 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:20.048031 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:60136.service: Deactivated successfully. Dec 13 00:28:20.050439 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 00:28:20.051418 systemd-logind[1607]: Session 22 logged out. Waiting for processes to exit. Dec 13 00:28:20.053274 systemd-logind[1607]: Removed session 22. Dec 13 00:28:21.148640 kubelet[2790]: E1213 00:28:21.148571 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:24.148285 kubelet[2790]: E1213 00:28:24.148198 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:25.060190 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:41168.service - OpenSSH per-connection server daemon (10.0.0.1:41168). Dec 13 00:28:25.121494 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 41168 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:25.123522 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:25.128238 systemd-logind[1607]: New session 23 of user core. Dec 13 00:28:25.138136 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 00:28:25.148518 kubelet[2790]: E1213 00:28:25.148470 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:25.233840 sshd[4255]: Connection closed by 10.0.0.1 port 41168 Dec 13 00:28:25.234180 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:25.239095 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:41168.service: Deactivated successfully. Dec 13 00:28:25.242087 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 00:28:25.244297 systemd-logind[1607]: Session 23 logged out. Waiting for processes to exit. Dec 13 00:28:25.245877 systemd-logind[1607]: Removed session 23. Dec 13 00:28:30.149822 kubelet[2790]: E1213 00:28:30.149515 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:30.252877 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:35482.service - OpenSSH per-connection server daemon (10.0.0.1:35482). Dec 13 00:28:30.322933 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 35482 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:30.325479 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:30.332721 systemd-logind[1607]: New session 24 of user core. Dec 13 00:28:30.342175 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 00:28:30.528604 sshd[4292]: Connection closed by 10.0.0.1 port 35482 Dec 13 00:28:30.528318 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:30.533702 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:35482.service: Deactivated successfully. Dec 13 00:28:30.536042 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 00:28:30.537271 systemd-logind[1607]: Session 24 logged out. Waiting for processes to exit. Dec 13 00:28:30.538821 systemd-logind[1607]: Removed session 24. Dec 13 00:28:34.149213 kubelet[2790]: E1213 00:28:34.149139 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:28:35.544903 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:35488.service - OpenSSH per-connection server daemon (10.0.0.1:35488). Dec 13 00:28:35.609456 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 35488 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:28:35.611776 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:28:35.617365 systemd-logind[1607]: New session 25 of user core. Dec 13 00:28:35.626316 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 00:28:35.692892 sshd[4329]: Connection closed by 10.0.0.1 port 35488 Dec 13 00:28:35.693198 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Dec 13 00:28:35.698770 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:35488.service: Deactivated successfully. Dec 13 00:28:35.701272 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 00:28:35.702205 systemd-logind[1607]: Session 25 logged out. Waiting for processes to exit. Dec 13 00:28:35.703901 systemd-logind[1607]: Removed session 25.