Jan 24 00:49:36.295861 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:49:36.295884 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:49:36.295894 kernel: BIOS-provided physical RAM map:
Jan 24 00:49:36.295900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:49:36.295906 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:49:36.295911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:49:36.295917 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:49:36.295922 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:49:36.295928 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:49:36.295936 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:49:36.295942 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:49:36.295947 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:49:36.295952 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:49:36.295958 kernel: NX (Execute Disable) protection: active
Jan 24 00:49:36.295965 kernel: APIC: Static calls initialized
Jan 24 00:49:36.295973 kernel: SMBIOS 2.8 present.
Jan 24 00:49:36.295979 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:49:36.295984 kernel: Hypervisor detected: KVM
Jan 24 00:49:36.295990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:49:36.295996 kernel: kvm-clock: using sched offset of 5731141529 cycles
Jan 24 00:49:36.296002 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:49:36.296008 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:49:36.296014 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:49:36.296020 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:49:36.296029 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:49:36.296035 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:49:36.296041 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:49:36.296047 kernel: Using GB pages for direct mapping
Jan 24 00:49:36.296052 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:49:36.296058 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:49:36.296064 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296070 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296076 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296084 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:49:36.296090 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296096 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296102 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296108 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:49:36.296114 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:49:36.296120 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:49:36.296129 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:49:36.296138 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:49:36.296144 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:49:36.296150 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:49:36.296157 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:49:36.296163 kernel: No NUMA configuration found
Jan 24 00:49:36.296169 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:49:36.296177 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:49:36.296183 kernel: Zone ranges:
Jan 24 00:49:36.296190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:49:36.296196 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:49:36.296202 kernel: Normal empty
Jan 24 00:49:36.296208 kernel: Movable zone start for each node
Jan 24 00:49:36.296214 kernel: Early memory node ranges
Jan 24 00:49:36.296220 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:49:36.296226 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:49:36.296232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:49:36.296241 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:49:36.296247 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:49:36.296254 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:49:36.296260 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:49:36.296266 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:49:36.296272 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:49:36.296278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:49:36.296284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:49:36.296291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:49:36.296299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:49:36.296305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:49:36.296311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:49:36.296317 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:49:36.296323 kernel: TSC deadline timer available
Jan 24 00:49:36.296330 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:49:36.296336 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:49:36.296342 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:49:36.296348 kernel: kvm-guest: setup PV sched yield
Jan 24 00:49:36.296356 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:49:36.296363 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:49:36.296369 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:49:36.296375 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:49:36.296381 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:49:36.296461 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:49:36.296468 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:49:36.296475 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:49:36.296481 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:49:36.296498 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:49:36.296713 kernel: random: crng init done
Jan 24 00:49:36.296732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:49:36.296738 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:49:36.296745 kernel: Fallback order for Node 0: 0
Jan 24 00:49:36.296751 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:49:36.296758 kernel: Policy zone: DMA32
Jan 24 00:49:36.296764 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:49:36.296781 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:49:36.296787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:49:36.296793 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:49:36.296800 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:49:36.296806 kernel: Dynamic Preempt: voluntary
Jan 24 00:49:36.296812 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:49:36.296826 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:49:36.296832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:49:36.296839 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:49:36.296848 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:49:36.296854 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:49:36.296860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:49:36.296866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:49:36.296873 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:49:36.296879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:49:36.296885 kernel: Console: colour VGA+ 80x25
Jan 24 00:49:36.296891 kernel: printk: console [ttyS0] enabled
Jan 24 00:49:36.296897 kernel: ACPI: Core revision 20230628
Jan 24 00:49:36.296904 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:49:36.296912 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:49:36.296918 kernel: x2apic enabled
Jan 24 00:49:36.296924 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:49:36.296931 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:49:36.296937 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:49:36.296943 kernel: kvm-guest: setup PV IPIs
Jan 24 00:49:36.296949 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:49:36.296965 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:49:36.296972 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:49:36.296978 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:49:36.296985 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:49:36.296994 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:49:36.297000 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:49:36.297007 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:49:36.297013 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:49:36.297020 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:49:36.297029 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:49:36.297036 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:49:36.297042 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:49:36.297049 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:49:36.297056 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:49:36.297062 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:49:36.297068 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:49:36.297075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:49:36.297084 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:49:36.297090 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:49:36.297097 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:49:36.297103 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:49:36.297110 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:49:36.297116 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:49:36.297123 kernel: landlock: Up and running.
Jan 24 00:49:36.297129 kernel: SELinux: Initializing.
Jan 24 00:49:36.297136 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:49:36.297144 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:49:36.297151 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:49:36.297158 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:49:36.297164 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:49:36.297292 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:49:36.297305 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:49:36.297312 kernel: signal: max sigframe size: 1776
Jan 24 00:49:36.297318 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:49:36.297325 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:49:36.297339 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:49:36.297346 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:49:36.297352 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:49:36.297359 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:49:36.297365 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:49:36.297372 kernel: smpboot: Max logical packages: 1
Jan 24 00:49:36.297378 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:49:36.297508 kernel: devtmpfs: initialized
Jan 24 00:49:36.297523 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:49:36.297537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:49:36.297543 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:49:36.297550 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:49:36.297556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:49:36.297563 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:49:36.297569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:49:36.297576 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:49:36.297582 kernel: audit: type=2000 audit(1769215773.220:1): state=initialized audit_enabled=0 res=1
Jan 24 00:49:36.297589 kernel: cpuidle: using governor menu
Jan 24 00:49:36.297598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:49:36.297604 kernel: dca service started, version 1.12.1
Jan 24 00:49:36.297611 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:49:36.297617 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:49:36.297747 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:49:36.297755 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:49:36.297761 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:49:36.297768 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:49:36.297774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:49:36.297786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:49:36.297792 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:49:36.297799 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:49:36.297805 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:49:36.297812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:49:36.297818 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:49:36.297825 kernel: ACPI: Interpreter enabled
Jan 24 00:49:36.297831 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:49:36.297838 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:49:36.297847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:49:36.297883 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:49:36.297889 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:49:36.297896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:49:36.298112 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:49:36.298250 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:49:36.298480 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:49:36.298497 kernel: PCI host bridge to bus 0000:00
Jan 24 00:49:36.298845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:49:36.299037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:49:36.299151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:49:36.299263 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:49:36.299372 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:49:36.299566 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:49:36.299737 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:49:36.299882 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:49:36.300015 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:49:36.300136 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:49:36.300255 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:49:36.300375 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:49:36.300573 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:49:36.300764 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:49:36.300889 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:49:36.301115 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:49:36.301237 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:49:36.301365 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:49:36.301588 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:49:36.301761 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:49:36.302018 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:49:36.302150 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:49:36.302357 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:49:36.302595 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:49:36.302785 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:49:36.303034 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:49:36.303166 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:49:36.303300 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:49:36.303507 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:49:36.303672 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:49:36.303800 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:49:36.303929 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:49:36.304050 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:49:36.304064 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:49:36.304071 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:49:36.304078 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:49:36.304085 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:49:36.304092 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:49:36.304098 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:49:36.304105 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:49:36.304112 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:49:36.304118 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:49:36.304128 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:49:36.304134 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:49:36.304141 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:49:36.304148 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:49:36.304154 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:49:36.304161 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:49:36.304167 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:49:36.304174 kernel: iommu: Default domain type: Translated
Jan 24 00:49:36.304181 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:49:36.304191 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:49:36.304198 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:49:36.304205 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:49:36.304287 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:49:36.304606 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:49:36.304774 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:49:36.304894 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:49:36.304904 kernel: vgaarb: loaded
Jan 24 00:49:36.304920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:49:36.304928 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:49:36.304935 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:49:36.304941 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:49:36.304948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:49:36.304955 kernel: pnp: PnP ACPI init
Jan 24 00:49:36.305095 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:49:36.305105 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:49:36.305117 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:49:36.305124 kernel: NET: Registered PF_INET protocol family
Jan 24 00:49:36.305130 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:49:36.305137 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:49:36.305144 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:49:36.305150 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:49:36.305157 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:49:36.305164 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:49:36.305170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:49:36.305180 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:49:36.305186 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:49:36.305193 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:49:36.305554 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:49:36.305847 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:49:36.306141 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:49:36.306261 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:49:36.306370 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:49:36.313592 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:49:36.313613 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:49:36.313621 kernel: Initialise system trusted keyrings
Jan 24 00:49:36.313677 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:49:36.313686 kernel: Key type asymmetric registered
Jan 24 00:49:36.313693 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:49:36.313700 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:49:36.313707 kernel: io scheduler mq-deadline registered
Jan 24 00:49:36.313715 kernel: io scheduler kyber registered
Jan 24 00:49:36.313722 kernel: io scheduler bfq registered
Jan 24 00:49:36.313733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:49:36.313741 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:49:36.313748 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:49:36.313755 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:49:36.313763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:49:36.313770 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:49:36.313777 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:49:36.313785 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:49:36.313792 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:49:36.313957 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:49:36.313970 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:49:36.314085 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:49:36.314199 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:49:35 UTC (1769215775)
Jan 24 00:49:36.314583 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:49:36.314597 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:49:36.314604 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:49:36.314621 kernel: Segment Routing with IPv6
Jan 24 00:49:36.314673 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:49:36.314681 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:49:36.314688 kernel: Key type dns_resolver registered
Jan 24 00:49:36.314695 kernel: IPI shorthand broadcast: enabled
Jan 24 00:49:36.314702 kernel: sched_clock: Marking stable (1320036161, 587196295)->(2422798237, -515565781)
Jan 24 00:49:36.314709 kernel: registered taskstats version 1
Jan 24 00:49:36.314716 kernel: Loading compiled-in X.509 certificates
Jan 24 00:49:36.314723 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:49:36.314730 kernel: Key type .fscrypt registered
Jan 24 00:49:36.314741 kernel: Key type fscrypt-provisioning registered
Jan 24 00:49:36.314748 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:49:36.314755 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:49:36.314763 kernel: ima: No architecture policies found
Jan 24 00:49:36.314770 kernel: clk: Disabling unused clocks
Jan 24 00:49:36.314777 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:49:36.314784 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:49:36.314791 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:49:36.314800 kernel: Run /init as init process
Jan 24 00:49:36.314807 kernel: with arguments:
Jan 24 00:49:36.314814 kernel: /init
Jan 24 00:49:36.314821 kernel: with environment:
Jan 24 00:49:36.314827 kernel: HOME=/
Jan 24 00:49:36.314834 kernel: TERM=linux
Jan 24 00:49:36.314843 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:49:36.314852 systemd[1]: Detected virtualization kvm.
Jan 24 00:49:36.314862 systemd[1]: Detected architecture x86-64.
Jan 24 00:49:36.314868 systemd[1]: Running in initrd.
Jan 24 00:49:36.314875 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:49:36.314882 systemd[1]: Hostname set to .
Jan 24 00:49:36.314890 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:49:36.314897 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:49:36.314905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:49:36.314912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:49:36.314922 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:49:36.314929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:49:36.314937 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:49:36.314944 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:49:36.314952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:49:36.314960 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:49:36.314967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:49:36.314976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:49:36.314984 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:49:36.314991 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:49:36.314998 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:49:36.315017 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:49:36.315027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:49:36.315037 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:49:36.315044 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:49:36.315052 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:49:36.315059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:49:36.315067 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:49:36.315074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:49:36.315082 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:49:36.315089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:49:36.315096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:49:36.315106 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:49:36.315113 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:49:36.315121 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:49:36.315128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:49:36.315135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:49:36.315167 systemd-journald[195]: Collecting audit messages is disabled.
Jan 24 00:49:36.315189 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:49:36.315197 systemd-journald[195]: Journal started
Jan 24 00:49:36.315213 systemd-journald[195]: Runtime Journal (/run/log/journal/6a4da4e6e26c42f99dbf68c72bb26cc0) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:49:36.315326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:49:36.328705 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:49:36.329464 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:49:36.531881 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:49:36.531925 kernel: Bridge firewalling registered
Jan 24 00:49:36.338149 systemd-modules-load[196]: Inserted module 'overlay'
Jan 24 00:49:36.372197 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 24 00:49:36.539846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:49:36.546041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:49:36.563938 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:49:36.577747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:49:36.590237 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:49:36.607106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:49:36.630335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:49:36.643992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:49:36.656835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:49:36.671507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:49:36.697867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:49:36.704676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:49:36.728007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:49:36.747753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:49:36.757941 dracut-cmdline[226]: dracut-dracut-053 Jan 24 00:49:36.757941 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:49:36.766686 systemd-resolved[228]: Positive Trust Anchors: Jan 24 00:49:36.766698 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:49:36.766725 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:49:36.769484 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 24 00:49:36.770873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:49:36.782021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:49:36.874821 kernel: SCSI subsystem initialized Jan 24 00:49:36.890704 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:49:36.920011 kernel: iscsi: registered transport (tcp) Jan 24 00:49:36.946466 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:49:36.946538 kernel: QLogic iSCSI HBA Driver Jan 24 00:49:37.025802 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:49:37.049914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:49:37.107334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 00:49:37.107916 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:49:37.107936 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:49:37.183752 kernel: raid6: avx2x4 gen() 21021 MB/s Jan 24 00:49:37.202746 kernel: raid6: avx2x2 gen() 19813 MB/s Jan 24 00:49:37.226862 kernel: raid6: avx2x1 gen() 11373 MB/s Jan 24 00:49:37.226942 kernel: raid6: using algorithm avx2x4 gen() 21021 MB/s Jan 24 00:49:37.247723 kernel: raid6: .... xor() 3906 MB/s, rmw enabled Jan 24 00:49:37.247807 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:49:37.272885 kernel: xor: automatically using best checksumming function avx Jan 24 00:49:37.474772 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:49:37.492840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:49:37.511088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:49:37.533572 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 24 00:49:37.539005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:49:37.558151 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:49:37.574882 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 24 00:49:37.632270 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:49:37.652728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:49:37.760872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:49:37.786767 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:49:37.804728 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:49:37.828809 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 24 00:49:37.840727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:49:37.846237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:49:37.867467 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:49:37.867525 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:49:37.873267 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:49:37.890811 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:49:37.899698 kernel: libata version 3.00 loaded. Jan 24 00:49:37.900334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:49:37.900794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:49:37.934169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:49:37.934204 kernel: GPT:9289727 != 19775487 Jan 24 00:49:37.934223 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:49:37.934240 kernel: GPT:9289727 != 19775487 Jan 24 00:49:37.934256 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:49:37.934272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:49:37.939001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:49:37.948882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:49:37.949338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:37.954147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:49:37.983777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 00:49:38.021367 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465) Jan 24 00:49:38.021448 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469) Jan 24 00:49:38.003379 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:49:38.037570 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:49:38.037775 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:49:38.038092 kernel: AES CTR mode by8 optimization enabled Jan 24 00:49:38.038112 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:49:38.045802 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:49:38.046196 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:49:38.047276 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:49:38.058636 kernel: scsi host0: ahci Jan 24 00:49:38.058962 kernel: scsi host1: ahci Jan 24 00:49:38.059179 kernel: scsi host2: ahci Jan 24 00:49:38.059473 kernel: scsi host3: ahci Jan 24 00:49:38.059760 kernel: scsi host4: ahci Jan 24 00:49:38.062532 kernel: scsi host5: ahci Jan 24 00:49:38.062831 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:49:38.066432 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:49:38.066456 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:49:38.066467 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:49:38.066476 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:49:38.066486 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:49:38.101903 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 24 00:49:38.259811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:49:38.263602 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:49:38.264044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:49:38.292795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:49:38.301179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:38.320752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:49:38.320701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:49:38.331109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:49:38.331130 disk-uuid[553]: Primary Header is updated. Jan 24 00:49:38.331130 disk-uuid[553]: Secondary Entries is updated. Jan 24 00:49:38.331130 disk-uuid[553]: Secondary Header is updated. Jan 24 00:49:38.340795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:49:38.386845 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:38.386909 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:38.385198 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:49:38.436595 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:38.436783 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:49:38.436797 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:38.436807 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:38.436816 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:49:38.436936 kernel: ata3.00: applying bridge limits Jan 24 00:49:38.436958 kernel: ata3.00: configured for UDMA/100 Jan 24 00:49:38.437105 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:49:38.495984 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:49:38.496268 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:49:38.528538 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:49:39.337467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:49:39.338345 disk-uuid[554]: The operation has completed successfully. Jan 24 00:49:39.372955 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:49:39.373198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:49:39.427057 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:49:39.440623 sh[596]: Success Jan 24 00:49:39.466517 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:49:39.532809 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:49:39.547977 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:49:39.552010 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 24 00:49:39.595255 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:49:39.595513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:39.595560 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:49:39.602656 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:49:39.602823 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:49:39.627235 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:49:39.628313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:49:39.647747 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:49:39.649225 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:49:39.684285 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:49:39.684520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:39.684537 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:49:39.693747 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:49:39.707833 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:49:39.723984 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:49:39.734858 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:49:39.749792 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 24 00:49:39.839296 ignition[708]: Ignition 2.19.0 Jan 24 00:49:39.839492 ignition[708]: Stage: fetch-offline Jan 24 00:49:39.839543 ignition[708]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:39.839557 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:49:39.839739 ignition[708]: parsed url from cmdline: "" Jan 24 00:49:39.839745 ignition[708]: no config URL provided Jan 24 00:49:39.839754 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:49:39.839767 ignition[708]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:49:39.839812 ignition[708]: op(1): [started] loading QEMU firmware config module Jan 24 00:49:39.839820 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:49:39.850892 ignition[708]: op(1): [finished] loading QEMU firmware config module Jan 24 00:49:39.877902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:49:39.891843 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:49:39.944339 systemd-networkd[786]: lo: Link UP Jan 24 00:49:39.944504 systemd-networkd[786]: lo: Gained carrier Jan 24 00:49:39.947616 systemd-networkd[786]: Enumeration completed Jan 24 00:49:39.948043 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:49:39.949769 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:49:39.949774 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 00:49:39.982115 ignition[708]: parsing config with SHA512: 7bc262aadc39cfc8adac2c5257ffce1650a6ae2a902ba9de6a16c753455b193f4ac3199de1e95f30c4fc21b7723fdde70d0c06e8983ebdfda014e7163c41a887 Jan 24 00:49:39.951366 systemd-networkd[786]: eth0: Link UP Jan 24 00:49:39.951373 systemd-networkd[786]: eth0: Gained carrier Jan 24 00:49:39.951455 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:49:39.992584 ignition[708]: fetch-offline: fetch-offline passed Jan 24 00:49:39.961070 systemd[1]: Reached target network.target - Network. Jan 24 00:49:39.993625 ignition[708]: Ignition finished successfully Jan 24 00:49:39.989368 unknown[708]: fetched base config from "system" Jan 24 00:49:39.989376 unknown[708]: fetched user config from "qemu" Jan 24 00:49:39.989568 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:49:39.996452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:49:40.005902 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:49:40.023127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:49:40.046563 ignition[790]: Ignition 2.19.0 Jan 24 00:49:40.046574 ignition[790]: Stage: kargs Jan 24 00:49:40.051817 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:49:40.046885 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:40.046902 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:49:40.048093 ignition[790]: kargs: kargs passed Jan 24 00:49:40.068719 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 24 00:49:40.048159 ignition[790]: Ignition finished successfully Jan 24 00:49:40.094377 ignition[799]: Ignition 2.19.0 Jan 24 00:49:40.094499 ignition[799]: Stage: disks Jan 24 00:49:40.094788 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:40.098866 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:49:40.094804 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:49:40.103223 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:49:40.095968 ignition[799]: disks: disks passed Jan 24 00:49:40.124213 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:49:40.096027 ignition[799]: Ignition finished successfully Jan 24 00:49:40.131303 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:49:40.135480 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:49:40.139644 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:49:40.164933 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:49:40.185344 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:49:40.194548 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:49:40.217625 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:49:40.354547 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:49:40.354848 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:49:40.361320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:49:40.388652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:49:40.397524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 24 00:49:40.433454 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Jan 24 00:49:40.433506 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:49:40.433525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:40.433541 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:49:40.433556 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:49:40.410016 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:49:40.410089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:49:40.410132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:49:40.456772 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:49:40.462969 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:49:40.489748 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:49:40.553508 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:49:40.565501 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:49:40.576380 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:49:40.583347 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:49:40.735200 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:49:40.753551 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:49:40.767519 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:49:40.769249 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 24 00:49:40.780299 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:49:40.814210 ignition[929]: INFO : Ignition 2.19.0 Jan 24 00:49:40.814210 ignition[929]: INFO : Stage: mount Jan 24 00:49:40.824522 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:40.824522 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:49:40.824522 ignition[929]: INFO : mount: mount passed Jan 24 00:49:40.824522 ignition[929]: INFO : Ignition finished successfully Jan 24 00:49:40.814852 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:49:40.819378 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:49:40.848831 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:49:40.859295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:49:40.877530 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Jan 24 00:49:40.884767 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:49:40.884823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:40.884841 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:49:40.895493 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:49:40.898032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:49:40.941994 ignition[960]: INFO : Ignition 2.19.0 Jan 24 00:49:40.941994 ignition[960]: INFO : Stage: files Jan 24 00:49:40.947192 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:40.947192 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:49:40.947192 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:49:40.947192 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:49:40.947192 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:49:40.966526 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:49:40.971291 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:49:40.976717 unknown[960]: wrote ssh authorized keys file for user: core Jan 24 00:49:40.980852 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:49:40.980852 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:49:40.980852 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:49:41.026477 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:49:41.153716 systemd-networkd[786]: eth0: Gained IPv6LL Jan 24 00:49:41.180053 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:49:41.180053 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:49:41.194111 ignition[960]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:49:41.435988 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:49:42.649741 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:49:42.649741 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:49:42.662314 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: op(f): [finished] 
setting preset to disabled for "coreos-metadata.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:49:42.716042 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:49:42.716042 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:49:42.716042 ignition[960]: INFO : files: files passed Jan 24 00:49:42.716042 ignition[960]: INFO : Ignition finished successfully Jan 24 00:49:42.692122 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:49:42.723878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:49:42.738253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:49:42.747504 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:49:42.816048 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:49:42.747756 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:49:42.827996 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:42.827996 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:42.762055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:49:42.862769 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:42.775335 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 24 00:49:42.787853 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:49:42.830558 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:49:42.830812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:49:42.839584 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:49:42.850183 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:49:42.853855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:49:42.854864 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:49:42.879222 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:49:42.901759 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:49:42.921512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:49:42.932468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:49:42.937210 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:49:42.947055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:49:42.947223 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:49:42.956883 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:49:42.966629 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:49:42.970656 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:49:42.978682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:49:42.987097 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:49:42.996283 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 24 00:49:43.006231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:49:43.006618 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:49:43.008594 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:49:43.009288 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:49:43.011186 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:49:43.011313 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:49:43.013076 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:49:43.014165 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:49:43.014812 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:49:43.015293 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:49:43.017462 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:49:43.017561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:49:43.019116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:49:43.019251 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:49:43.020306 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:49:43.021324 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:49:43.021651 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:49:43.021937 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:49:43.022509 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:49:43.023079 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:49:43.023268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:49:43.024135 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:49:43.024251 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:49:43.025225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:49:43.025348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:49:43.026333 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:49:43.198789 ignition[1015]: INFO : Ignition 2.19.0
Jan 24 00:49:43.198789 ignition[1015]: INFO : Stage: umount
Jan 24 00:49:43.198789 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:49:43.198789 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:49:43.198789 ignition[1015]: INFO : umount: umount passed
Jan 24 00:49:43.198789 ignition[1015]: INFO : Ignition finished successfully
Jan 24 00:49:43.026484 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:49:43.074958 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:49:43.076993 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:49:43.077215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:49:43.084859 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:49:43.091027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:49:43.091262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:49:43.118105 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:49:43.118256 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:49:43.136292 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:49:43.138156 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:49:43.138267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:49:43.196792 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:49:43.196968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:49:43.266228 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:49:43.266567 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:49:43.275469 systemd[1]: Stopped target network.target - Network.
Jan 24 00:49:43.275637 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:49:43.275766 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:49:43.280974 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:49:43.281044 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:49:43.289523 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:49:43.289591 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:49:43.295167 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:49:43.295236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:49:43.304215 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:49:43.304275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:49:43.307855 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:49:43.316924 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:49:43.336530 systemd-networkd[786]: eth0: DHCPv6 lease lost
Jan 24 00:49:43.340622 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:49:43.340911 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:49:43.348793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:49:43.348860 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:49:43.367619 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:49:43.373768 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:49:43.373912 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:49:43.381566 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:49:43.394665 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:49:43.398505 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:49:43.414838 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:49:43.418320 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:49:43.428674 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:49:43.428885 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:49:43.441016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:49:43.441148 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:49:43.448774 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:49:43.448823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:49:43.459171 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:49:43.459270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:49:43.472305 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:49:43.472481 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:49:43.482172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:49:43.482270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:49:43.497868 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:49:43.505021 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:49:43.505168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:49:43.512267 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:49:43.512343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:49:43.516361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:49:43.516517 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:49:43.522783 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:49:43.522860 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:49:43.539593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:49:43.539683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:49:43.551025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:49:43.551218 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:49:43.559197 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:49:43.583762 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:49:43.593817 systemd[1]: Switching root.
Jan 24 00:49:43.625588 systemd-journald[195]: Journal stopped
Jan 24 00:49:45.105314 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:49:45.105378 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:49:45.105451 kernel: SELinux: policy capability open_perms=1
Jan 24 00:49:45.105464 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:49:45.105479 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:49:45.105490 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:49:45.105500 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:49:45.105510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:49:45.105524 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:49:45.105534 kernel: audit: type=1403 audit(1769215783.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:49:45.105545 systemd[1]: Successfully loaded SELinux policy in 56.481ms.
Jan 24 00:49:45.105569 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.791ms.
Jan 24 00:49:45.105581 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:49:45.105592 systemd[1]: Detected virtualization kvm.
Jan 24 00:49:45.105603 systemd[1]: Detected architecture x86-64.
Jan 24 00:49:45.105613 systemd[1]: Detected first boot.
Jan 24 00:49:45.105624 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:49:45.105638 zram_generator::config[1057]: No configuration found.
Jan 24 00:49:45.105651 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:49:45.105662 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:49:45.105682 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:49:45.105693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:49:45.105704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:49:45.105760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:49:45.105778 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:49:45.105794 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:49:45.105804 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:49:45.105815 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:49:45.105826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:49:45.105837 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:49:45.105848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:49:45.105859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:49:45.105870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:49:45.105880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:49:45.105893 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:49:45.105905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:49:45.105916 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:49:45.105926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:49:45.105937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:49:45.105949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:49:45.105960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:49:45.105973 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:49:45.105984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:49:45.105995 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:49:45.106006 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:49:45.106016 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:49:45.106027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:49:45.106038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:49:45.106048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:49:45.106059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:49:45.106072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:49:45.106083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:49:45.106093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:49:45.106104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:49:45.106115 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:49:45.106126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:49:45.106137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:49:45.106147 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:49:45.106158 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:49:45.106172 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:49:45.106184 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:49:45.106195 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:49:45.106205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:49:45.106216 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:49:45.106227 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:49:45.106238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:49:45.106248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:49:45.106262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:49:45.106272 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:49:45.106283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:49:45.106294 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:49:45.106305 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:49:45.106315 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:49:45.106326 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:49:45.106336 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:49:45.106347 kernel: ACPI: bus type drm_connector registered
Jan 24 00:49:45.106365 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:49:45.106376 kernel: fuse: init (API version 7.39)
Jan 24 00:49:45.106484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:49:45.106497 kernel: loop: module loaded
Jan 24 00:49:45.106551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:49:45.106583 systemd-journald[1141]: Collecting audit messages is disabled.
Jan 24 00:49:45.106604 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:49:45.106615 systemd-journald[1141]: Journal started
Jan 24 00:49:45.106638 systemd-journald[1141]: Runtime Journal (/run/log/journal/6a4da4e6e26c42f99dbf68c72bb26cc0) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:49:44.544698 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:49:44.572921 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:49:44.573882 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:49:44.574494 systemd[1]: systemd-journald.service: Consumed 1.547s CPU time.
Jan 24 00:49:45.121174 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:49:45.121227 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:49:45.126162 systemd[1]: Stopped verity-setup.service.
Jan 24 00:49:45.128895 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:49:45.142810 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:49:45.143695 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:49:45.147382 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:49:45.151378 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:49:45.154874 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:49:45.158630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:49:45.162274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:49:45.165830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:49:45.169919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:49:45.174252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:49:45.174650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:49:45.178901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:49:45.179177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:49:45.183580 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:49:45.183860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:49:45.188956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:49:45.189188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:49:45.194698 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:49:45.194958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:49:45.199240 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:49:45.199542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:49:45.204220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:49:45.210694 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:49:45.216989 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:49:45.237070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:49:45.250863 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:49:45.256807 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:49:45.261339 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:49:45.261479 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:49:45.266993 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:49:45.273098 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:49:45.281497 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:49:45.285594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:49:45.288042 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:49:45.293167 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:49:45.297243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:49:45.304833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:49:45.309803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:49:45.314884 systemd-journald[1141]: Time spent on flushing to /var/log/journal/6a4da4e6e26c42f99dbf68c72bb26cc0 is 12.902ms for 939 entries.
Jan 24 00:49:45.314884 systemd-journald[1141]: System Journal (/var/log/journal/6a4da4e6e26c42f99dbf68c72bb26cc0) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:49:45.339690 systemd-journald[1141]: Received client request to flush runtime journal.
Jan 24 00:49:45.319355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:49:45.327635 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:49:45.335681 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:49:45.339580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:49:45.339986 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:49:45.340565 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:49:45.341219 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:49:45.344247 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:49:45.362204 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:49:45.367144 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:49:45.379033 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:49:45.384651 kernel: loop0: detected capacity change from 0 to 224512
Jan 24 00:49:45.389450 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:49:45.398966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:49:45.414801 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 24 00:49:45.417599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:49:45.427445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:49:45.427212 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:49:45.445935 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:49:45.464102 kernel: loop1: detected capacity change from 0 to 142488
Jan 24 00:49:45.462021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:49:45.491373 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 24 00:49:45.491441 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 24 00:49:45.499172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:49:45.514456 kernel: loop2: detected capacity change from 0 to 140768
Jan 24 00:49:45.599581 kernel: loop3: detected capacity change from 0 to 224512
Jan 24 00:49:45.616554 kernel: loop4: detected capacity change from 0 to 142488
Jan 24 00:49:45.630493 kernel: loop5: detected capacity change from 0 to 140768
Jan 24 00:49:45.639316 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:49:45.640071 (sd-merge)[1197]: Merged extensions into '/usr'.
Jan 24 00:49:45.646358 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:49:45.646372 systemd[1]: Reloading...
Jan 24 00:49:45.722520 zram_generator::config[1220]: No configuration found.
Jan 24 00:49:45.954633 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:49:45.969317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:49:46.027198 systemd[1]: Reloading finished in 380 ms.
Jan 24 00:49:46.061707 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:49:46.066253 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:49:46.070782 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:49:46.093860 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:49:46.099965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:49:46.108155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:49:46.117354 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:49:46.117494 systemd[1]: Reloading...
Jan 24 00:49:46.128054 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:49:46.128547 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:49:46.129646 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:49:46.130062 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 24 00:49:46.130173 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 24 00:49:46.134093 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:49:46.134128 systemd-tmpfiles[1262]: Skipping /boot
Jan 24 00:49:46.142640 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Jan 24 00:49:46.146320 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:49:46.146512 systemd-tmpfiles[1262]: Skipping /boot
Jan 24 00:49:46.199471 zram_generator::config[1303]: No configuration found.
Jan 24 00:49:46.245523 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1288)
Jan 24 00:49:46.296588 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:49:46.305513 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:49:46.334473 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:49:46.334789 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:49:46.334982 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 24 00:49:46.334997 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:49:46.331807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:49:46.371527 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:49:46.397349 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:49:46.397796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:49:46.408315 systemd[1]: Reloading finished in 290 ms.
Jan 24 00:49:46.484357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:49:46.507493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:49:46.531002 kernel: kvm_amd: TSC scaling supported
Jan 24 00:49:46.531047 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:49:46.531062 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:49:46.534584 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:49:46.534608 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:49:46.551646 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:49:46.603776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:49:46.605490 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:49:46.619682 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:49:46.625542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:49:46.629874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:49:46.632613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:49:46.637564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:49:46.642487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:49:46.647934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:49:46.653344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:49:46.657598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:49:46.664144 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:49:46.672625 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:49:46.676812 augenrules[1383]: No rules
Jan 24 00:49:46.687668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:49:46.694135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:49:46.699699 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:49:46.705346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:49:46.709627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:49:46.711174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:49:46.716910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:49:46.722081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:49:46.722597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:49:46.727673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:49:46.727899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:49:46.733179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:49:46.733366 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:49:46.738349 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:49:46.738801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:49:46.743882 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:49:46.749319 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:49:46.755101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:49:46.776660 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:49:46.781806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:49:46.781879 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:49:46.783717 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:49:46.788320 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:49:46.977195 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:49:46.982292 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:49:46.983841 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:49:46.991560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:49:46.998446 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:49:47.005639 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:49:47.018324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:49:47.036677 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:49:47.041549 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:49:47.051359 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:49:47.089959 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:49:47.100164 systemd-networkd[1384]: lo: Link UP
Jan 24 00:49:47.100204 systemd-networkd[1384]: lo: Gained carrier
Jan 24 00:49:47.102323 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:49:47.102599 systemd-networkd[1384]: Enumeration completed
Jan 24 00:49:47.103830 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:49:47.103909 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:49:47.105462 systemd-networkd[1384]: eth0: Link UP
Jan 24 00:49:47.105496 systemd-networkd[1384]: eth0: Gained carrier
Jan 24 00:49:47.105512 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:49:47.106871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:49:47.111013 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:49:47.120071 systemd-resolved[1390]: Positive Trust Anchors:
Jan 24 00:49:47.120117 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:49:47.120144 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:49:47.120665 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:49:47.131293 systemd-resolved[1390]: Defaulting to hostname 'linux'.
Jan 24 00:49:47.133979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:49:47.134509 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:49:47.135692 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Jan 24 00:49:47.137364 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 24 00:49:47.137510 systemd-timesyncd[1391]: Initial clock synchronization to Sat 2026-01-24 00:49:47.083084 UTC.
Jan 24 00:49:47.138912 systemd[1]: Reached target network.target - Network.
Jan 24 00:49:47.142219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:49:47.146556 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:49:47.150523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:49:47.155078 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:49:47.159928 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:49:47.164099 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:49:47.168630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:49:47.173174 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:49:47.173249 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:49:47.176546 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:49:47.181076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:49:47.187622 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:49:47.198017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:49:47.202142 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:49:47.205852 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:49:47.209271 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:49:47.212448 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:49:47.212485 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:49:47.213998 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:49:47.218978 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:49:47.223459 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:49:47.228284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:49:47.231659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:49:47.233466 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:49:47.238587 jq[1429]: false
Jan 24 00:49:47.253517 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:49:47.260586 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:49:47.267786 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:49:47.275616 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:49:47.279520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:49:47.280203 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:49:47.282612 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:49:47.288060 extend-filesystems[1430]: Found loop3
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found loop4
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found loop5
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found sr0
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda1
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda2
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda3
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found usr
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda4
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda6
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda7
Jan 24 00:49:47.290929 extend-filesystems[1430]: Found vda9
Jan 24 00:49:47.290929 extend-filesystems[1430]: Checking size of /dev/vda9
Jan 24 00:49:47.344693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1288)
Jan 24 00:49:47.319846 dbus-daemon[1428]: [system] SELinux support is enabled
Jan 24 00:49:47.293561 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:49:47.348560 update_engine[1442]: I20260124 00:49:47.309226 1442 main.cc:92] Flatcar Update Engine starting
Jan 24 00:49:47.348560 update_engine[1442]: I20260124 00:49:47.326167 1442 update_check_scheduler.cc:74] Next update check in 11m31s
Jan 24 00:49:47.348820 extend-filesystems[1430]: Resized partition /dev/vda9
Jan 24 00:49:47.359359 jq[1446]: true
Jan 24 00:49:47.307535 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:49:47.307794 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:49:47.308146 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:49:47.308321 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:49:47.320698 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:49:47.347976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:49:47.348220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:49:47.360314 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:49:47.377573 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:49:47.383096 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:49:47.383669 tar[1452]: linux-amd64/LICENSE
Jan 24 00:49:47.393807 tar[1452]: linux-amd64/helm
Jan 24 00:49:47.385080 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:49:47.385111 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:49:47.387087 systemd-logind[1441]: New seat seat0.
Jan 24 00:49:47.394242 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:49:47.395230 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:49:47.401274 jq[1454]: true
Jan 24 00:49:47.402649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:49:47.402873 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:49:47.407873 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:49:47.408063 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:49:47.419535 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:49:47.492479 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 24 00:49:47.495962 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:49:47.515720 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 00:49:47.515720 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 24 00:49:47.515720 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 24 00:49:47.528791 extend-filesystems[1430]: Resized filesystem in /dev/vda9
Jan 24 00:49:47.525797 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:49:47.526065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:49:47.539250 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:49:47.542151 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:49:47.549713 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:49:47.578142 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:49:47.620726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:49:47.625590 containerd[1457]: time="2026-01-24T00:49:47.622968893Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:49:47.633778 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:49:47.647891 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:49:47.648490 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:49:47.661376 containerd[1457]: time="2026-01-24T00:49:47.661245783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.661871 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:49:47.668168 containerd[1457]: time="2026-01-24T00:49:47.668116222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:49:47.668168 containerd[1457]: time="2026-01-24T00:49:47.668166887Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:49:47.668242 containerd[1457]: time="2026-01-24T00:49:47.668183318Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:49:47.668484 containerd[1457]: time="2026-01-24T00:49:47.668376988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:49:47.668530 containerd[1457]: time="2026-01-24T00:49:47.668488207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.668604 containerd[1457]: time="2026-01-24T00:49:47.668563577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:49:47.668604 containerd[1457]: time="2026-01-24T00:49:47.668601678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669020 containerd[1457]: time="2026-01-24T00:49:47.668955018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669020 containerd[1457]: time="2026-01-24T00:49:47.669010792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669067 containerd[1457]: time="2026-01-24T00:49:47.669026280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669067 containerd[1457]: time="2026-01-24T00:49:47.669036810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669172 containerd[1457]: time="2026-01-24T00:49:47.669132519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669542 containerd[1457]: time="2026-01-24T00:49:47.669506367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669712 containerd[1457]: time="2026-01-24T00:49:47.669671265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:49:47.669777 containerd[1457]: time="2026-01-24T00:49:47.669729473Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:49:47.669947 containerd[1457]: time="2026-01-24T00:49:47.669897346Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:49:47.670048 containerd[1457]: time="2026-01-24T00:49:47.670002072Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:49:47.674645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:49:47.676982 containerd[1457]: time="2026-01-24T00:49:47.676948868Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:49:47.677071 containerd[1457]: time="2026-01-24T00:49:47.677057852Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:49:47.677171 containerd[1457]: time="2026-01-24T00:49:47.677158800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:49:47.677248 containerd[1457]: time="2026-01-24T00:49:47.677234292Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677285918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677513512Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677704930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677854869Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677870348Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677881900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677894373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677907167Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677918148Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677931633Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677946390Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677958543Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677970615Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.679541 containerd[1457]: time="2026-01-24T00:49:47.677981135Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.677998788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678011522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678022613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678039595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678051146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678065332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678075983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678089818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678101580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678114594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678125805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678135643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678149279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678163966Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:49:47.680058 containerd[1457]: time="2026-01-24T00:49:47.678182191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678192750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678202228Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678243415Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678260717Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678270916Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678285053Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678293558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678304619Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678319577Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 00:49:47.680843 containerd[1457]: time="2026-01-24T00:49:47.678328935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 00:49:47.681154 containerd[1457]: time="2026-01-24T00:49:47.678627332Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 00:49:47.681154 containerd[1457]: time="2026-01-24T00:49:47.678677605Z" level=info msg="Connect containerd service"
Jan 24 00:49:47.681154 containerd[1457]: time="2026-01-24T00:49:47.678715827Z" level=info msg="using legacy CRI server"
Jan 24 00:49:47.681154 containerd[1457]: time="2026-01-24T00:49:47.678722149Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:49:47.681154 containerd[1457]: time="2026-01-24T00:49:47.678865506Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 00:49:47.682116 containerd[1457]: time="2026-01-24T00:49:47.682095495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:49:47.682319 containerd[1457]: time="2026-01-24T00:49:47.682290669Z" level=info msg="Start subscribing containerd event"
Jan 24 00:49:47.682457 containerd[1457]: time="2026-01-24T00:49:47.682442022Z" level=info msg="Start recovering state"
Jan 24 00:49:47.682588 containerd[1457]: time="2026-01-24T00:49:47.682574349Z" level=info msg="Start event monitor"
Jan 24 00:49:47.682634 containerd[1457]: time="2026-01-24T00:49:47.682623921Z" level=info msg="Start snapshots syncer"
Jan 24 00:49:47.682673 containerd[1457]: time="2026-01-24T00:49:47.682663235Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:49:47.682714 containerd[1457]: time="2026-01-24T00:49:47.682703981Z" level=info msg="Start streaming server"
Jan 24 00:49:47.683305 containerd[1457]: time="2026-01-24T00:49:47.683287871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:49:47.683515 containerd[1457]: time="2026-01-24T00:49:47.683500948Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:49:47.684506 containerd[1457]: time="2026-01-24T00:49:47.684330633Z" level=info msg="containerd successfully booted in 0.062547s"
Jan 24 00:49:47.693997 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:49:47.701070 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:49:47.706975 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:49:47.713133 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:49:47.893280 tar[1452]: linux-amd64/README.md
Jan 24 00:49:47.915222 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 00:49:49.089790 systemd-networkd[1384]: eth0: Gained IPv6LL
Jan 24 00:49:49.094059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:49:49.102906 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:49:49.117772 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 24 00:49:49.124649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:49:49.129990 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 00:49:49.162703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 00:49:49.187176 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 24 00:49:49.187717 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 24 00:49:49.193201 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 00:49:50.035957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:49:50.043166 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 00:49:50.043499 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:49:50.050545 systemd[1]: Startup finished in 1.525s (kernel) + 7.992s (initrd) + 6.291s (userspace) = 15.809s.
Jan 24 00:49:50.588181 kubelet[1540]: E0124 00:49:50.588080 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:49:50.592608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:49:50.593001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:49:50.593606 systemd[1]: kubelet.service: Consumed 1.112s CPU time.
Jan 24 00:49:50.696718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 00:49:50.698745 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:38346.service - OpenSSH per-connection server daemon (10.0.0.1:38346).
Jan 24 00:49:50.763690 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 38346 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:49:50.766217 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:49:50.779546 systemd-logind[1441]: New session 1 of user core.
Jan 24 00:49:50.781236 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:49:50.799994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:49:50.818496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:49:50.831030 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:49:50.835913 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 00:49:50.957097 systemd[1557]: Queued start job for default target default.target.
Jan 24 00:49:50.966903 systemd[1557]: Created slice app.slice - User Application Slice.
Jan 24 00:49:50.966960 systemd[1557]: Reached target paths.target - Paths.
Jan 24 00:49:50.966973 systemd[1557]: Reached target timers.target - Timers.
Jan 24 00:49:50.968973 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:49:50.984837 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:49:50.985034 systemd[1557]: Reached target sockets.target - Sockets.
Jan 24 00:49:50.985078 systemd[1557]: Reached target basic.target - Basic System.
Jan 24 00:49:50.985116 systemd[1557]: Reached target default.target - Main User Target.
Jan 24 00:49:50.985153 systemd[1557]: Startup finished in 137ms.
Jan 24 00:49:50.985767 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:49:50.988530 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:49:51.051921 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350).
Jan 24 00:49:51.134652 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:49:51.136836 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:49:51.143758 systemd-logind[1441]: New session 2 of user core.
Jan 24 00:49:51.159791 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:49:51.219942 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jan 24 00:49:51.228338 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:38350.service: Deactivated successfully.
Jan 24 00:49:51.230540 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 00:49:51.232250 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Jan 24 00:49:51.243900 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:38360.service - OpenSSH per-connection server daemon (10.0.0.1:38360).
Jan 24 00:49:51.245121 systemd-logind[1441]: Removed session 2.
Jan 24 00:49:51.285209 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 38360 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:49:51.287149 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:49:51.293526 systemd-logind[1441]: New session 3 of user core.
Jan 24 00:49:51.304659 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 00:49:51.359353 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jan 24 00:49:51.372783 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:38360.service: Deactivated successfully.
Jan 24 00:49:51.374733 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 00:49:51.376310 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit.
Jan 24 00:49:51.391955 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:38362.service - OpenSSH per-connection server daemon (10.0.0.1:38362).
Jan 24 00:49:51.393356 systemd-logind[1441]: Removed session 3.
Jan 24 00:49:51.433706 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:49:51.435618 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:49:51.441567 systemd-logind[1441]: New session 4 of user core.
Jan 24 00:49:51.455642 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 00:49:51.515796 sshd[1582]: pam_unix(sshd:session): session closed for user core
Jan 24 00:49:51.534460 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:38362.service: Deactivated successfully.
Jan 24 00:49:51.536252 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 00:49:51.537843 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
Jan 24 00:49:51.548796 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:38368.service - OpenSSH per-connection server daemon (10.0.0.1:38368).
Jan 24 00:49:51.550020 systemd-logind[1441]: Removed session 4.
Jan 24 00:49:51.588493 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 38368 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:49:51.590136 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:49:51.595820 systemd-logind[1441]: New session 5 of user core.
Jan 24 00:49:51.614640 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 00:49:51.680716 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 24 00:49:51.681109 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:49:52.009715 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 24 00:49:52.009898 (dockerd)[1611]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 24 00:49:52.292531 dockerd[1611]: time="2026-01-24T00:49:52.291901183Z" level=info msg="Starting up"
Jan 24 00:49:52.583530 dockerd[1611]: time="2026-01-24T00:49:52.583314128Z" level=info msg="Loading containers: start."
Jan 24 00:49:52.757515 kernel: Initializing XFRM netlink socket
Jan 24 00:49:52.860173 systemd-networkd[1384]: docker0: Link UP
Jan 24 00:49:52.890844 dockerd[1611]: time="2026-01-24T00:49:52.890771816Z" level=info msg="Loading containers: done."
Jan 24 00:49:52.909098 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3982697979-merged.mount: Deactivated successfully.
Jan 24 00:49:52.911096 dockerd[1611]: time="2026-01-24T00:49:52.911031131Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 00:49:52.911216 dockerd[1611]: time="2026-01-24T00:49:52.911157554Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 24 00:49:52.911443 dockerd[1611]: time="2026-01-24T00:49:52.911321677Z" level=info msg="Daemon has completed initialization"
Jan 24 00:49:52.965985 dockerd[1611]: time="2026-01-24T00:49:52.965789180Z" level=info msg="API listen on /run/docker.sock"
Jan 24 00:49:52.966055 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 24 00:49:53.804699 containerd[1457]: time="2026-01-24T00:49:53.804629872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 24 00:49:54.476961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370547943.mount: Deactivated successfully.
Jan 24 00:49:55.735871 containerd[1457]: time="2026-01-24T00:49:55.735730035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:55.736562 containerd[1457]: time="2026-01-24T00:49:55.736530558Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647"
Jan 24 00:49:55.738578 containerd[1457]: time="2026-01-24T00:49:55.738490265Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:55.742318 containerd[1457]: time="2026-01-24T00:49:55.742229001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:55.743536 containerd[1457]: time="2026-01-24T00:49:55.743487830Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.938779108s"
Jan 24 00:49:55.743578 containerd[1457]: time="2026-01-24T00:49:55.743565259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 24 00:49:55.745113 containerd[1457]: time="2026-01-24T00:49:55.744903650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 24 00:49:57.205824 containerd[1457]: time="2026-01-24T00:49:57.205719763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:57.206771 containerd[1457]: time="2026-01-24T00:49:57.206723389Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354"
Jan 24 00:49:57.207978 containerd[1457]: time="2026-01-24T00:49:57.207922173Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:57.211297 containerd[1457]: time="2026-01-24T00:49:57.211227299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:57.212548 containerd[1457]: time="2026-01-24T00:49:57.212476878Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.467544448s"
Jan 24 00:49:57.212548 containerd[1457]: time="2026-01-24T00:49:57.212526100Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 24 00:49:57.213272 containerd[1457]: time="2026-01-24T00:49:57.213208147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 24 00:49:58.525455 containerd[1457]: time="2026-01-24T00:49:58.525248656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:58.526781 containerd[1457]: time="2026-01-24T00:49:58.526702725Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076"
Jan 24 00:49:58.528284 containerd[1457]: time="2026-01-24T00:49:58.528218166Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:58.533571 containerd[1457]: time="2026-01-24T00:49:58.532512446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:58.537458 containerd[1457]: time="2026-01-24T00:49:58.537327018Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.324053881s"
Jan 24 00:49:58.537458 containerd[1457]: time="2026-01-24T00:49:58.537441497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 24 00:49:58.538338 containerd[1457]: time="2026-01-24T00:49:58.538246990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 24 00:49:59.579366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403295822.mount: Deactivated successfully.
Jan 24 00:50:00.141812 containerd[1457]: time="2026-01-24T00:50:00.141661643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:00.142924 containerd[1457]: time="2026-01-24T00:50:00.142863557Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899"
Jan 24 00:50:00.144566 containerd[1457]: time="2026-01-24T00:50:00.144455080Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:00.147606 containerd[1457]: time="2026-01-24T00:50:00.147505916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:00.148717 containerd[1457]: time="2026-01-24T00:50:00.148628060Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.61031232s"
Jan 24 00:50:00.148717 containerd[1457]: time="2026-01-24T00:50:00.148706619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 24 00:50:00.149561 containerd[1457]: time="2026-01-24T00:50:00.149504686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 24 00:50:00.648093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:50:00.657831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:50:00.839035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:50:00.844880 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:50:00.864857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690027530.mount: Deactivated successfully.
Jan 24 00:50:00.906770 kubelet[1845]: E0124 00:50:00.906632 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:50:00.913371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:50:00.913753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:50:01.665712 containerd[1457]: time="2026-01-24T00:50:01.665617425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:01.666870 containerd[1457]: time="2026-01-24T00:50:01.666782494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jan 24 00:50:01.668198 containerd[1457]: time="2026-01-24T00:50:01.668079297Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:01.671254 containerd[1457]: time="2026-01-24T00:50:01.671178189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:01.672255 containerd[1457]: time="2026-01-24T00:50:01.672189671Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.522635439s"
Jan 24 00:50:01.672255 containerd[1457]: time="2026-01-24T00:50:01.672240565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 24 00:50:01.673360 containerd[1457]: time="2026-01-24T00:50:01.673279474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 00:50:02.087029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296145601.mount: Deactivated successfully.
Jan 24 00:50:02.094071 containerd[1457]: time="2026-01-24T00:50:02.094003148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:02.095244 containerd[1457]: time="2026-01-24T00:50:02.095174981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 24 00:50:02.096589 containerd[1457]: time="2026-01-24T00:50:02.096480026Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:02.099107 containerd[1457]: time="2026-01-24T00:50:02.098987969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:02.100611 containerd[1457]: time="2026-01-24T00:50:02.100504514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 427.141372ms"
Jan 24 00:50:02.100611 containerd[1457]: time="2026-01-24T00:50:02.100554586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 24 00:50:02.101176 containerd[1457]: time="2026-01-24T00:50:02.101126363Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 24 00:50:02.595856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227729508.mount: Deactivated successfully.
Jan 24 00:50:04.688503 containerd[1457]: time="2026-01-24T00:50:04.688314258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:04.689841 containerd[1457]: time="2026-01-24T00:50:04.689664788Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Jan 24 00:50:04.691533 containerd[1457]: time="2026-01-24T00:50:04.691328383Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:04.695508 containerd[1457]: time="2026-01-24T00:50:04.695253934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:50:04.697346 containerd[1457]: time="2026-01-24T00:50:04.697219702Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.596030974s"
Jan 24 00:50:04.697346 containerd[1457]: time="2026-01-24T00:50:04.697299037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 24 00:50:06.702298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:50:06.711678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:50:06.741814 systemd[1]: Reloading requested from client PID 1994 ('systemctl') (unit session-5.scope)...
Jan 24 00:50:06.741859 systemd[1]: Reloading...
Jan 24 00:50:06.827477 zram_generator::config[2033]: No configuration found.
Jan 24 00:50:06.941180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:50:07.014524 systemd[1]: Reloading finished in 272 ms.
Jan 24 00:50:07.074743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:50:07.078873 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:50:07.080724 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:50:07.081168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:50:07.091754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:50:07.252500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:50:07.260655 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:50:07.317574 kubelet[2083]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:50:07.317574 kubelet[2083]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:50:07.317574 kubelet[2083]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:50:07.317989 kubelet[2083]: I0124 00:50:07.317579 2083 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:50:07.517808 kubelet[2083]: I0124 00:50:07.517676 2083 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 00:50:07.517808 kubelet[2083]: I0124 00:50:07.517722 2083 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:50:07.517982 kubelet[2083]: I0124 00:50:07.517943 2083 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 00:50:07.536775 kubelet[2083]: E0124 00:50:07.536689 2083 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:50:07.537196 kubelet[2083]: I0124 00:50:07.537073 2083 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:50:07.544494 kubelet[2083]: E0124 00:50:07.544451 2083 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:50:07.544494 kubelet[2083]: I0124 00:50:07.544493 2083 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:50:07.550719 kubelet[2083]: I0124 00:50:07.550559 2083 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 00:50:07.551763 kubelet[2083]: I0124 00:50:07.551695 2083 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:50:07.551963 kubelet[2083]: I0124 00:50:07.551751 2083 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:50:07.551963 kubelet[2083]: I0124 00:50:07.551961 2083 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:50:07.552104 kubelet[2083]: I0124 00:50:07.551971 2083 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 00:50:07.552104 kubelet[2083]: I0124 00:50:07.552093 2083 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:50:07.555990 kubelet[2083]: I0124 00:50:07.555849 2083 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 00:50:07.555990 kubelet[2083]: I0124 00:50:07.555898 2083 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:50:07.555990 kubelet[2083]: I0124 00:50:07.555917 2083 kubelet.go:352] "Adding apiserver pod source"
Jan 24 00:50:07.555990 kubelet[2083]: I0124 00:50:07.555928 2083 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:50:07.559013 kubelet[2083]: I0124 00:50:07.558941 2083 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:50:07.559513 kubelet[2083]: I0124 00:50:07.559485 2083 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 00:50:07.559690 kubelet[2083]: W0124 00:50:07.559620 2083 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 24 00:50:07.560685 kubelet[2083]: W0124 00:50:07.560202 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
Jan 24 00:50:07.560685 kubelet[2083]: E0124 00:50:07.560256 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:50:07.560685 kubelet[2083]: W0124 00:50:07.560626 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
Jan 24 00:50:07.560685 kubelet[2083]: E0124 00:50:07.560663 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:50:07.561484 kubelet[2083]: I0124 00:50:07.561332 2083 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 00:50:07.561484 kubelet[2083]: I0124 00:50:07.561470 2083 server.go:1287] "Started kubelet"
Jan 24 00:50:07.563492 kubelet[2083]: I0124 00:50:07.562828 2083 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:50:07.564982 kubelet[2083]: I0124 00:50:07.564938 2083 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:50:07.565869 kubelet[2083]: I0124 00:50:07.565833 2083 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 00:50:07.567160 kubelet[2083]: E0124 00:50:07.564567 2083 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84673cb38c0b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:50:07.561370635 +0000 UTC m=+0.295038803,LastTimestamp:2026-01-24 00:50:07.561370635 +0000 UTC m=+0.295038803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 24 00:50:07.567264 kubelet[2083]: E0124 00:50:07.567232 2083 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:50:07.568657 kubelet[2083]: E0124 00:50:07.567338 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567365 2083 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567457 2083 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567622 2083 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567658 2083 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567718 2083 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:50:07.568657 kubelet[2083]: I0124 00:50:07.567773 2083 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:50:07.568657 kubelet[2083]: W0124 00:50:07.567881 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
Jan 24 00:50:07.568657 kubelet[2083]: E0124 00:50:07.567914 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
Jan 24 00:50:07.568657 kubelet[2083]: E0124 00:50:07.567988 2083 controller.go:145] "Failed to
ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Jan 24 00:50:07.570099 kubelet[2083]: I0124 00:50:07.569924 2083 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:50:07.570099 kubelet[2083]: I0124 00:50:07.570000 2083 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:50:07.571458 kubelet[2083]: I0124 00:50:07.571356 2083 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:50:07.589144 kubelet[2083]: I0124 00:50:07.589060 2083 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:50:07.589242 kubelet[2083]: I0124 00:50:07.589201 2083 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:50:07.589242 kubelet[2083]: I0124 00:50:07.589211 2083 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:50:07.589242 kubelet[2083]: I0124 00:50:07.589226 2083 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:07.591145 kubelet[2083]: I0124 00:50:07.591091 2083 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:50:07.591200 kubelet[2083]: I0124 00:50:07.591154 2083 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:50:07.591200 kubelet[2083]: I0124 00:50:07.591173 2083 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:50:07.591200 kubelet[2083]: I0124 00:50:07.591181 2083 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:50:07.591267 kubelet[2083]: E0124 00:50:07.591224 2083 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:50:07.668700 kubelet[2083]: E0124 00:50:07.668573 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:07.685554 kubelet[2083]: W0124 00:50:07.685360 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jan 24 00:50:07.685649 kubelet[2083]: E0124 00:50:07.685560 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:50:07.685649 kubelet[2083]: I0124 00:50:07.685630 2083 policy_none.go:49] "None policy: Start" Jan 24 00:50:07.685649 kubelet[2083]: I0124 00:50:07.685648 2083 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:50:07.685733 kubelet[2083]: I0124 00:50:07.685669 2083 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:50:07.692204 kubelet[2083]: E0124 00:50:07.692175 2083 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:50:07.693360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:50:07.711861 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 24 00:50:07.717246 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:50:07.727891 kubelet[2083]: I0124 00:50:07.727811 2083 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:50:07.728279 kubelet[2083]: I0124 00:50:07.728186 2083 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:50:07.728279 kubelet[2083]: I0124 00:50:07.728207 2083 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:50:07.728942 kubelet[2083]: I0124 00:50:07.728691 2083 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:50:07.729802 kubelet[2083]: E0124 00:50:07.729751 2083 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:50:07.729852 kubelet[2083]: E0124 00:50:07.729824 2083 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:50:07.769530 kubelet[2083]: E0124 00:50:07.769341 2083 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Jan 24 00:50:07.831087 kubelet[2083]: I0124 00:50:07.830733 2083 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:07.831354 kubelet[2083]: E0124 00:50:07.831284 2083 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jan 24 00:50:07.903372 systemd[1]: Created slice kubepods-burstable-pod7a4cf9fac59edd34e2be50d03a80d36b.slice - 
libcontainer container kubepods-burstable-pod7a4cf9fac59edd34e2be50d03a80d36b.slice. Jan 24 00:50:07.932682 kubelet[2083]: E0124 00:50:07.932614 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:07.937020 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 24 00:50:07.939273 kubelet[2083]: E0124 00:50:07.939105 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:07.941696 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 24 00:50:07.943610 kubelet[2083]: E0124 00:50:07.943537 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:07.973742 kubelet[2083]: I0124 00:50:07.973557 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:07.973742 kubelet[2083]: I0124 00:50:07.973636 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:07.973742 kubelet[2083]: I0124 00:50:07.973672 2083 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:07.973742 kubelet[2083]: I0124 00:50:07.973695 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:07.973742 kubelet[2083]: I0124 00:50:07.973716 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:07.973977 kubelet[2083]: I0124 00:50:07.973737 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:07.973977 kubelet[2083]: I0124 00:50:07.973756 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:07.973977 kubelet[2083]: I0124 00:50:07.973788 2083 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:07.973977 kubelet[2083]: I0124 00:50:07.973810 2083 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:08.034685 kubelet[2083]: I0124 00:50:08.034509 2083 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:08.035161 kubelet[2083]: E0124 00:50:08.035066 2083 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jan 24 00:50:08.170900 kubelet[2083]: E0124 00:50:08.170737 2083 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Jan 24 00:50:08.234039 kubelet[2083]: E0124 00:50:08.233902 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:08.235056 containerd[1457]: time="2026-01-24T00:50:08.234988869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a4cf9fac59edd34e2be50d03a80d36b,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:08.242103 kubelet[2083]: E0124 00:50:08.241987 2083 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:08.242896 containerd[1457]: time="2026-01-24T00:50:08.242636494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:08.244022 kubelet[2083]: E0124 00:50:08.243985 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:08.244502 containerd[1457]: time="2026-01-24T00:50:08.244383279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:08.436860 kubelet[2083]: I0124 00:50:08.436731 2083 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:08.437273 kubelet[2083]: E0124 00:50:08.437193 2083 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jan 24 00:50:08.575883 kubelet[2083]: W0124 00:50:08.575787 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jan 24 00:50:08.575883 kubelet[2083]: E0124 00:50:08.575885 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" 
logger="UnhandledError" Jan 24 00:50:08.671800 kubelet[2083]: W0124 00:50:08.671571 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jan 24 00:50:08.671800 kubelet[2083]: E0124 00:50:08.671653 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:50:08.813126 kubelet[2083]: W0124 00:50:08.812864 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jan 24 00:50:08.813126 kubelet[2083]: E0124 00:50:08.812950 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:50:08.890574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705729811.mount: Deactivated successfully. 
Jan 24 00:50:08.898315 containerd[1457]: time="2026-01-24T00:50:08.898220409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:08.901675 containerd[1457]: time="2026-01-24T00:50:08.901613827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:50:08.902779 containerd[1457]: time="2026-01-24T00:50:08.902696038Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:08.903920 containerd[1457]: time="2026-01-24T00:50:08.903879465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:50:08.905262 containerd[1457]: time="2026-01-24T00:50:08.905026686Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:08.906237 containerd[1457]: time="2026-01-24T00:50:08.906183324Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:08.907131 containerd[1457]: time="2026-01-24T00:50:08.907047245Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:50:08.911156 containerd[1457]: time="2026-01-24T00:50:08.911103627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:08.914638 
containerd[1457]: time="2026-01-24T00:50:08.914531397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.457108ms" Jan 24 00:50:08.920784 containerd[1457]: time="2026-01-24T00:50:08.920734240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.190548ms" Jan 24 00:50:08.921514 containerd[1457]: time="2026-01-24T00:50:08.921473117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 678.776949ms" Jan 24 00:50:08.934230 kubelet[2083]: W0124 00:50:08.934102 2083 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Jan 24 00:50:08.934230 kubelet[2083]: E0124 00:50:08.934186 2083 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:50:08.971967 kubelet[2083]: E0124 00:50:08.971796 2083 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" Jan 24 00:50:09.044254 containerd[1457]: time="2026-01-24T00:50:09.043341457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:50:09.044254 containerd[1457]: time="2026-01-24T00:50:09.043495851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:50:09.044254 containerd[1457]: time="2026-01-24T00:50:09.043514850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.044254 containerd[1457]: time="2026-01-24T00:50:09.043693109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.047704 containerd[1457]: time="2026-01-24T00:50:09.047589095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:50:09.047704 containerd[1457]: time="2026-01-24T00:50:09.047659919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:50:09.047704 containerd[1457]: time="2026-01-24T00:50:09.047671104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.047827 containerd[1457]: time="2026-01-24T00:50:09.047763960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.049704 containerd[1457]: time="2026-01-24T00:50:09.049111503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:50:09.049704 containerd[1457]: time="2026-01-24T00:50:09.049206852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:50:09.049704 containerd[1457]: time="2026-01-24T00:50:09.049230586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.049704 containerd[1457]: time="2026-01-24T00:50:09.049306487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:09.073755 systemd[1]: Started cri-containerd-1476cef94adfb9cedc1d1b4c4dde71dd246d0b674dd30fe21436e60ac3a20fab.scope - libcontainer container 1476cef94adfb9cedc1d1b4c4dde71dd246d0b674dd30fe21436e60ac3a20fab. Jan 24 00:50:09.078238 systemd[1]: Started cri-containerd-b1bba070c3fa4e87cf65ef702d087a248040d3b4aa4a9fcfbc51335f54e78ceb.scope - libcontainer container b1bba070c3fa4e87cf65ef702d087a248040d3b4aa4a9fcfbc51335f54e78ceb. Jan 24 00:50:09.100712 systemd[1]: Started cri-containerd-43f825839c39a49e03f621cc2fd27d23034cbbed7d264d510af2ab4fb74da1c2.scope - libcontainer container 43f825839c39a49e03f621cc2fd27d23034cbbed7d264d510af2ab4fb74da1c2. 
Jan 24 00:50:09.132968 containerd[1457]: time="2026-01-24T00:50:09.132662758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1476cef94adfb9cedc1d1b4c4dde71dd246d0b674dd30fe21436e60ac3a20fab\"" Jan 24 00:50:09.136052 kubelet[2083]: E0124 00:50:09.136030 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:09.139188 containerd[1457]: time="2026-01-24T00:50:09.138998138Z" level=info msg="CreateContainer within sandbox \"1476cef94adfb9cedc1d1b4c4dde71dd246d0b674dd30fe21436e60ac3a20fab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:50:09.148065 containerd[1457]: time="2026-01-24T00:50:09.147990682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1bba070c3fa4e87cf65ef702d087a248040d3b4aa4a9fcfbc51335f54e78ceb\"" Jan 24 00:50:09.150920 kubelet[2083]: E0124 00:50:09.150663 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:09.153016 containerd[1457]: time="2026-01-24T00:50:09.152882965Z" level=info msg="CreateContainer within sandbox \"b1bba070c3fa4e87cf65ef702d087a248040d3b4aa4a9fcfbc51335f54e78ceb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:50:09.161345 containerd[1457]: time="2026-01-24T00:50:09.161278985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a4cf9fac59edd34e2be50d03a80d36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"43f825839c39a49e03f621cc2fd27d23034cbbed7d264d510af2ab4fb74da1c2\"" Jan 24 
00:50:09.162155 kubelet[2083]: E0124 00:50:09.162091 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:09.167248 containerd[1457]: time="2026-01-24T00:50:09.167176576Z" level=info msg="CreateContainer within sandbox \"43f825839c39a49e03f621cc2fd27d23034cbbed7d264d510af2ab4fb74da1c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:50:09.167989 containerd[1457]: time="2026-01-24T00:50:09.167872548Z" level=info msg="CreateContainer within sandbox \"1476cef94adfb9cedc1d1b4c4dde71dd246d0b674dd30fe21436e60ac3a20fab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"578216ae016fa6fe6b7a812d6f41850ab7821e34d3e5c2206a920e9273e4f058\"" Jan 24 00:50:09.168516 containerd[1457]: time="2026-01-24T00:50:09.168445923Z" level=info msg="StartContainer for \"578216ae016fa6fe6b7a812d6f41850ab7821e34d3e5c2206a920e9273e4f058\"" Jan 24 00:50:09.187023 containerd[1457]: time="2026-01-24T00:50:09.186891161Z" level=info msg="CreateContainer within sandbox \"b1bba070c3fa4e87cf65ef702d087a248040d3b4aa4a9fcfbc51335f54e78ceb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6fda222a2efbb6cca1fc6c7ce3acd815ec39b4b20ba935d562cc06f33544268\"" Jan 24 00:50:09.187758 containerd[1457]: time="2026-01-24T00:50:09.187681207Z" level=info msg="StartContainer for \"c6fda222a2efbb6cca1fc6c7ce3acd815ec39b4b20ba935d562cc06f33544268\"" Jan 24 00:50:09.190537 containerd[1457]: time="2026-01-24T00:50:09.190473738Z" level=info msg="CreateContainer within sandbox \"43f825839c39a49e03f621cc2fd27d23034cbbed7d264d510af2ab4fb74da1c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"758c18baafd338b95db4d6af074690d120238556e3f3eadccf2ddfff4d989ac0\"" Jan 24 00:50:09.190906 containerd[1457]: time="2026-01-24T00:50:09.190885951Z" level=info 
msg="StartContainer for \"758c18baafd338b95db4d6af074690d120238556e3f3eadccf2ddfff4d989ac0\"" Jan 24 00:50:09.210902 systemd[1]: Started cri-containerd-578216ae016fa6fe6b7a812d6f41850ab7821e34d3e5c2206a920e9273e4f058.scope - libcontainer container 578216ae016fa6fe6b7a812d6f41850ab7821e34d3e5c2206a920e9273e4f058. Jan 24 00:50:09.233579 systemd[1]: Started cri-containerd-c6fda222a2efbb6cca1fc6c7ce3acd815ec39b4b20ba935d562cc06f33544268.scope - libcontainer container c6fda222a2efbb6cca1fc6c7ce3acd815ec39b4b20ba935d562cc06f33544268. Jan 24 00:50:09.238908 systemd[1]: Started cri-containerd-758c18baafd338b95db4d6af074690d120238556e3f3eadccf2ddfff4d989ac0.scope - libcontainer container 758c18baafd338b95db4d6af074690d120238556e3f3eadccf2ddfff4d989ac0. Jan 24 00:50:09.240665 kubelet[2083]: I0124 00:50:09.239669 2083 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:09.240665 kubelet[2083]: E0124 00:50:09.240535 2083 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Jan 24 00:50:09.296620 containerd[1457]: time="2026-01-24T00:50:09.295872707Z" level=info msg="StartContainer for \"c6fda222a2efbb6cca1fc6c7ce3acd815ec39b4b20ba935d562cc06f33544268\" returns successfully" Jan 24 00:50:09.296620 containerd[1457]: time="2026-01-24T00:50:09.295951883Z" level=info msg="StartContainer for \"578216ae016fa6fe6b7a812d6f41850ab7821e34d3e5c2206a920e9273e4f058\" returns successfully" Jan 24 00:50:09.314822 containerd[1457]: time="2026-01-24T00:50:09.314785221Z" level=info msg="StartContainer for \"758c18baafd338b95db4d6af074690d120238556e3f3eadccf2ddfff4d989ac0\" returns successfully" Jan 24 00:50:09.634050 kubelet[2083]: E0124 00:50:09.633982 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 
00:50:09.636687 kubelet[2083]: E0124 00:50:09.634148 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:09.638829 kubelet[2083]: E0124 00:50:09.638775 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:09.638969 kubelet[2083]: E0124 00:50:09.638918 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:09.646278 kubelet[2083]: E0124 00:50:09.646220 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:09.646482 kubelet[2083]: E0124 00:50:09.646378 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:10.654903 kubelet[2083]: E0124 00:50:10.653710 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:10.656534 kubelet[2083]: E0124 00:50:10.655161 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:10.656534 kubelet[2083]: E0124 00:50:10.655475 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:10.656534 kubelet[2083]: E0124 00:50:10.655595 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:10.675188 kubelet[2083]: E0124 00:50:10.675049 2083 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:50:10.843756 kubelet[2083]: I0124 00:50:10.843596 2083 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:10.852707 kubelet[2083]: I0124 00:50:10.852667 2083 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:50:10.852884 kubelet[2083]: E0124 00:50:10.852715 2083 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:50:10.860517 kubelet[2083]: E0124 00:50:10.860495 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:10.961849 kubelet[2083]: E0124 00:50:10.961516 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.062837 kubelet[2083]: E0124 00:50:11.062724 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.163311 kubelet[2083]: E0124 00:50:11.163176 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.264832 kubelet[2083]: E0124 00:50:11.264493 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.365349 kubelet[2083]: E0124 00:50:11.365188 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.466059 kubelet[2083]: E0124 00:50:11.465876 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.567103 kubelet[2083]: E0124 
00:50:11.566800 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.631658 kubelet[2083]: E0124 00:50:11.631566 2083 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:50:11.632209 kubelet[2083]: E0124 00:50:11.632154 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:11.667254 kubelet[2083]: E0124 00:50:11.667076 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.767353 kubelet[2083]: E0124 00:50:11.767251 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.868763 kubelet[2083]: E0124 00:50:11.868516 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:11.969096 kubelet[2083]: E0124 00:50:11.968964 2083 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:12.068374 kubelet[2083]: I0124 00:50:12.068264 2083 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:12.078642 kubelet[2083]: I0124 00:50:12.078094 2083 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:12.083837 kubelet[2083]: I0124 00:50:12.083630 2083 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:12.558868 kubelet[2083]: I0124 00:50:12.558721 2083 apiserver.go:52] "Watching apiserver" Jan 24 00:50:12.562785 kubelet[2083]: E0124 00:50:12.562109 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:12.562785 kubelet[2083]: E0124 00:50:12.562597 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:12.562785 kubelet[2083]: E0124 00:50:12.562651 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:12.568985 kubelet[2083]: I0124 00:50:12.568905 2083 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:50:13.209923 systemd[1]: Reloading requested from client PID 2364 ('systemctl') (unit session-5.scope)... Jan 24 00:50:13.209971 systemd[1]: Reloading... Jan 24 00:50:13.309610 zram_generator::config[2406]: No configuration found. Jan 24 00:50:13.435235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:50:13.536207 systemd[1]: Reloading finished in 325 ms. Jan 24 00:50:13.597938 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:13.612067 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:50:13.612666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:13.626110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:13.806314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
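The repeated `dns.go:153 "Nameserver limits exceeded"` entries throughout this log come from the kubelet's resolv.conf handling: it applies at most three nameservers (matching the classic glibc resolver limit) and logs which servers survived. A minimal sketch of that truncation; the three-server cap is real, but `parse_nameservers` and `apply_nameserver_limit` are illustrative helpers, not the kubelet's actual code:

```python
# Illustrative sketch of the kubelet's nameserver cap (3, the glibc
# resolver limit). These helpers are hypothetical, not kubelet source.
MAX_DNS_NAMESERVERS = 3

def parse_nameservers(resolv_conf: str) -> list[str]:
    # Collect the address from each "nameserver <addr>" line.
    return [line.split()[1]
            for line in resolv_conf.splitlines()
            if line.startswith("nameserver") and len(line.split()) > 1]

def apply_nameserver_limit(servers: list[str]) -> tuple[list[str], bool]:
    # Returns the applied servers and whether any were omitted -- the
    # condition that produces the "Nameserver limits exceeded" log line.
    return servers[:MAX_DNS_NAMESERVERS], len(servers) > MAX_DNS_NAMESERVERS

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")
applied, exceeded = apply_nameserver_limit(parse_nameservers(conf))
```

With a fourth server present, `applied` keeps only the first three, which matches the "applied nameserver line" reported in the log.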
Jan 24 00:50:13.812244 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:50:13.886988 kubelet[2448]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:50:13.886988 kubelet[2448]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:50:13.886988 kubelet[2448]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:50:13.886988 kubelet[2448]: I0124 00:50:13.886984 2448 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:50:13.900201 kubelet[2448]: I0124 00:50:13.900098 2448 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:50:13.900201 kubelet[2448]: I0124 00:50:13.900174 2448 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:50:13.901118 kubelet[2448]: I0124 00:50:13.901047 2448 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:50:13.905255 kubelet[2448]: I0124 00:50:13.905139 2448 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 24 00:50:13.910162 kubelet[2448]: I0124 00:50:13.910050 2448 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:50:13.916610 kubelet[2448]: E0124 00:50:13.916547 2448 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:50:13.916610 kubelet[2448]: I0124 00:50:13.916591 2448 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:50:13.928452 kubelet[2448]: I0124 00:50:13.928045 2448 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:50:13.928660 kubelet[2448]: I0124 00:50:13.928511 2448 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:50:13.929496 kubelet[2448]: I0124 00:50:13.928545 2448 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:50:13.929496 kubelet[2448]: I0124 00:50:13.929255 2448 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:50:13.929496 kubelet[2448]: I0124 00:50:13.929271 2448 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:50:13.929496 kubelet[2448]: I0124 00:50:13.929336 2448 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:13.931276 kubelet[2448]: I0124 00:50:13.931183 2448 kubelet.go:446] "Attempting 
to sync node with API server" Jan 24 00:50:13.931276 kubelet[2448]: I0124 00:50:13.931233 2448 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:50:13.931276 kubelet[2448]: I0124 00:50:13.931280 2448 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:50:13.931550 kubelet[2448]: I0124 00:50:13.931292 2448 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:50:13.933749 kubelet[2448]: I0124 00:50:13.933663 2448 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:50:13.934303 kubelet[2448]: I0124 00:50:13.934245 2448 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:50:13.935477 kubelet[2448]: I0124 00:50:13.935273 2448 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:50:13.935621 kubelet[2448]: I0124 00:50:13.935485 2448 server.go:1287] "Started kubelet" Jan 24 00:50:13.936501 kubelet[2448]: I0124 00:50:13.936469 2448 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:50:13.937534 kubelet[2448]: I0124 00:50:13.937460 2448 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:50:13.940555 kubelet[2448]: I0124 00:50:13.938628 2448 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:50:13.940555 kubelet[2448]: I0124 00:50:13.938905 2448 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:50:13.947631 kubelet[2448]: I0124 00:50:13.946966 2448 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:50:13.947631 kubelet[2448]: I0124 00:50:13.947213 2448 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:50:13.947631 kubelet[2448]: I0124 00:50:13.947523 2448 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:50:13.955119 kubelet[2448]: E0124 00:50:13.951678 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:50:13.955119 kubelet[2448]: I0124 00:50:13.951941 2448 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:50:13.955119 kubelet[2448]: I0124 00:50:13.951956 2448 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:50:13.955119 kubelet[2448]: I0124 00:50:13.954644 2448 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:50:13.955119 kubelet[2448]: I0124 00:50:13.954736 2448 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:50:13.961930 kubelet[2448]: I0124 00:50:13.961897 2448 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:50:13.962892 kubelet[2448]: E0124 00:50:13.961922 2448 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:50:13.985903 kubelet[2448]: I0124 00:50:13.985741 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:50:13.992290 kubelet[2448]: I0124 00:50:13.992234 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:50:13.992543 kubelet[2448]: I0124 00:50:13.992502 2448 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:50:13.993544 kubelet[2448]: I0124 00:50:13.992535 2448 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
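The `HardEvictionThresholds` in the NodeConfig logged above pair each eviction signal with either an absolute quantity (`memory.available < 100Mi`) or a percentage of capacity (`nodefs.available < 10%`, and so on). A simplified, hypothetical sketch of how such a threshold check could be evaluated; this is not the eviction manager's actual code:

```python
# Hard-eviction thresholds as logged in the NodeConfig: an eviction
# signal fires when the observed free amount drops below the limit,
# expressed as an absolute quantity or a fraction of capacity.
THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def threshold_met(signal: str, observed: int, capacity: int) -> bool:
    t = THRESHOLDS[signal]
    limit = t.get("quantity", t.get("percentage", 0) * capacity)
    return observed < limit  # below the limit => the signal fires

# 80Mi of free memory is under the 100Mi hard threshold on any node:
fires = threshold_met("memory.available", 80 * 1024 * 1024, 8 * 1024**3)
```

Note the log also shows `GracePeriod: 0` on each entry, which is what makes these *hard* thresholds: eviction starts as soon as the signal fires.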
Jan 24 00:50:13.993633 kubelet[2448]: I0124 00:50:13.993622 2448 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:50:13.993862 kubelet[2448]: E0124 00:50:13.993773 2448 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:50:14.046210 kubelet[2448]: I0124 00:50:14.046073 2448 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:50:14.046210 kubelet[2448]: I0124 00:50:14.046123 2448 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:50:14.046210 kubelet[2448]: I0124 00:50:14.046144 2448 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046307 2448 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046374 2448 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046477 2448 policy_none.go:49] "None policy: Start" Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046491 2448 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046503 2448 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:50:14.046675 kubelet[2448]: I0124 00:50:14.046608 2448 state_mem.go:75] "Updated machine memory state" Jan 24 00:50:14.057776 kubelet[2448]: I0124 00:50:14.057595 2448 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:50:14.057888 kubelet[2448]: I0124 00:50:14.057828 2448 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:50:14.057888 kubelet[2448]: I0124 00:50:14.057839 2448 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:50:14.058106 kubelet[2448]: I0124 00:50:14.058085 2448 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:50:14.062244 kubelet[2448]: E0124 00:50:14.062034 2448 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:50:14.095149 kubelet[2448]: I0124 00:50:14.095007 2448 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:14.096875 kubelet[2448]: I0124 00:50:14.096838 2448 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.097960 kubelet[2448]: I0124 00:50:14.097743 2448 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:14.108603 kubelet[2448]: E0124 00:50:14.108247 2448 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.108603 kubelet[2448]: E0124 00:50:14.108509 2448 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:14.109676 kubelet[2448]: E0124 00:50:14.109543 2448 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:14.180652 kubelet[2448]: I0124 00:50:14.180380 2448 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:50:14.199144 kubelet[2448]: I0124 00:50:14.199028 2448 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:50:14.199350 kubelet[2448]: I0124 00:50:14.199167 2448 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:50:14.254203 kubelet[2448]: I0124 00:50:14.254050 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.254203 kubelet[2448]: I0124 00:50:14.254156 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.254628 kubelet[2448]: I0124 00:50:14.254200 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:14.254628 kubelet[2448]: I0124 00:50:14.254332 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:14.254628 kubelet[2448]: I0124 00:50:14.254361 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.254628 kubelet[2448]: I0124 00:50:14.254382 2448 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.254628 kubelet[2448]: I0124 00:50:14.254556 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:50:14.254749 kubelet[2448]: I0124 00:50:14.254584 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:14.254749 kubelet[2448]: I0124 00:50:14.254607 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a4cf9fac59edd34e2be50d03a80d36b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4cf9fac59edd34e2be50d03a80d36b\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:50:14.409105 kubelet[2448]: E0124 00:50:14.408964 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:14.409105 kubelet[2448]: E0124 00:50:14.409095 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:14.410686 
kubelet[2448]: E0124 00:50:14.410520 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:14.933124 kubelet[2448]: I0124 00:50:14.932946 2448 apiserver.go:52] "Watching apiserver" Jan 24 00:50:14.953295 kubelet[2448]: I0124 00:50:14.953233 2448 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:50:15.015874 kubelet[2448]: I0124 00:50:15.015725 2448 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:15.016357 kubelet[2448]: E0124 00:50:15.015919 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:15.016845 kubelet[2448]: E0124 00:50:15.016694 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:15.029178 kubelet[2448]: E0124 00:50:15.028988 2448 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:50:15.029582 kubelet[2448]: E0124 00:50:15.029271 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:15.107063 kubelet[2448]: I0124 00:50:15.105862 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.105837881 podStartE2EDuration="3.105837881s" podCreationTimestamp="2026-01-24 00:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 
00:50:15.105790139 +0000 UTC m=+1.288639208" watchObservedRunningTime="2026-01-24 00:50:15.105837881 +0000 UTC m=+1.288686980" Jan 24 00:50:15.107063 kubelet[2448]: I0124 00:50:15.106040 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.106032789 podStartE2EDuration="3.106032789s" podCreationTimestamp="2026-01-24 00:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:15.069032929 +0000 UTC m=+1.251882008" watchObservedRunningTime="2026-01-24 00:50:15.106032789 +0000 UTC m=+1.288881858" Jan 24 00:50:15.131482 kubelet[2448]: I0124 00:50:15.131304 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.131283172 podStartE2EDuration="3.131283172s" podCreationTimestamp="2026-01-24 00:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:15.118830395 +0000 UTC m=+1.301679474" watchObservedRunningTime="2026-01-24 00:50:15.131283172 +0000 UTC m=+1.314132242" Jan 24 00:50:15.423678 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 24 00:50:15.427865 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 24 00:50:15.433746 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:38368.service: Deactivated successfully. Jan 24 00:50:15.436837 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:50:15.437181 systemd[1]: session-5.scope: Consumed 3.939s CPU time, 160.9M memory peak, 0B memory swap peak. Jan 24 00:50:15.439180 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:50:15.441223 systemd-logind[1441]: Removed session 5. 
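The `pod_startup_latency_tracker` entries above report `podStartSLOduration` as simply `observedRunningTime - podCreationTimestamp` (the pull timestamps are zero because these static pods pulled no images). Reproducing the kube-apiserver-localhost figure from the log, with the timestamps truncated to microseconds since `datetime` carries no nanosecond field:

```python
from datetime import datetime, timezone

# podStartSLOduration = observedRunningTime - podCreationTimestamp,
# using the values logged for kube-apiserver-localhost (truncated to
# microsecond precision).
created  = datetime(2026, 1, 24, 0, 50, 12, tzinfo=timezone.utc)
observed = datetime(2026, 1, 24, 0, 50, 15, 105837, tzinfo=timezone.utc)

slo_duration = (observed - created).total_seconds()  # ~3.105837 s
```

This agrees with the logged `podStartSLOduration=3.105837881` up to the truncated nanoseconds.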
Jan 24 00:50:16.018338 kubelet[2448]: E0124 00:50:16.018220 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:16.019109 kubelet[2448]: E0124 00:50:16.018610 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:18.160843 kubelet[2448]: I0124 00:50:18.160605 2448 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:50:18.161499 containerd[1457]: time="2026-01-24T00:50:18.161280159Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:50:18.161920 kubelet[2448]: I0124 00:50:18.161720 2448 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:50:18.823869 systemd[1]: Created slice kubepods-besteffort-podb3ccd6a7_c67f_4010_adaf_0ee09b5cb5e4.slice - libcontainer container kubepods-besteffort-podb3ccd6a7_c67f_4010_adaf_0ee09b5cb5e4.slice. Jan 24 00:50:18.851208 systemd[1]: Created slice kubepods-burstable-pod63137e90_9d72_4ba4_80d4_33d443f3972d.slice - libcontainer container kubepods-burstable-pod63137e90_9d72_4ba4_80d4_33d443f3972d.slice. 
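The slice names created just above follow the systemd cgroup driver's convention: `kubepods-<qos>-pod<uid>.slice`, with the dashes in the pod UID escaped to underscores because systemd reserves `-` as its slice hierarchy separator. A minimal sketch of the name construction (Burstable and BestEffort pods get a QoS segment; Guaranteed pods sit directly under kubepods):

```python
# systemd slice naming for pod cgroups: dashes in the UID become
# underscores so they are not parsed as slice hierarchy separators.
def pod_slice_name(qos: str, pod_uid: str) -> str:
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# The kube-proxy pod from the log (QoS class BestEffort):
name = pod_slice_name("besteffort", "b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4")
```

The result matches the `kubepods-besteffort-pod...slice` unit systemd reports creating in the log.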
Jan 24 00:50:18.893130 kubelet[2448]: I0124 00:50:18.892939 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh52w\" (UniqueName: \"kubernetes.io/projected/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-kube-api-access-nh52w\") pod \"kube-proxy-bzcx6\" (UID: \"b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4\") " pod="kube-system/kube-proxy-bzcx6" Jan 24 00:50:18.893130 kubelet[2448]: I0124 00:50:18.893030 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/63137e90-9d72-4ba4-80d4-33d443f3972d-flannel-cfg\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:18.893130 kubelet[2448]: I0124 00:50:18.893058 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqbq\" (UniqueName: \"kubernetes.io/projected/63137e90-9d72-4ba4-80d4-33d443f3972d-kube-api-access-8sqbq\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:18.893130 kubelet[2448]: I0124 00:50:18.893081 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-lib-modules\") pod \"kube-proxy-bzcx6\" (UID: \"b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4\") " pod="kube-system/kube-proxy-bzcx6" Jan 24 00:50:18.893130 kubelet[2448]: I0124 00:50:18.893102 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/63137e90-9d72-4ba4-80d4-33d443f3972d-cni\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:18.893618 kubelet[2448]: I0124 
00:50:18.893124 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-xtables-lock\") pod \"kube-proxy-bzcx6\" (UID: \"b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4\") " pod="kube-system/kube-proxy-bzcx6" Jan 24 00:50:18.893618 kubelet[2448]: I0124 00:50:18.893143 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63137e90-9d72-4ba4-80d4-33d443f3972d-run\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:18.893618 kubelet[2448]: I0124 00:50:18.893176 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/63137e90-9d72-4ba4-80d4-33d443f3972d-cni-plugin\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:18.893618 kubelet[2448]: I0124 00:50:18.893200 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-kube-proxy\") pod \"kube-proxy-bzcx6\" (UID: \"b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4\") " pod="kube-system/kube-proxy-bzcx6" Jan 24 00:50:18.893618 kubelet[2448]: I0124 00:50:18.893223 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63137e90-9d72-4ba4-80d4-33d443f3972d-xtables-lock\") pod \"kube-flannel-ds-mr2xj\" (UID: \"63137e90-9d72-4ba4-80d4-33d443f3972d\") " pod="kube-flannel/kube-flannel-ds-mr2xj" Jan 24 00:50:19.002299 kubelet[2448]: E0124 00:50:19.002206 2448 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: 
configmap "kube-root-ca.crt" not found Jan 24 00:50:19.002299 kubelet[2448]: E0124 00:50:19.002234 2448 projected.go:194] Error preparing data for projected volume kube-api-access-8sqbq for pod kube-flannel/kube-flannel-ds-mr2xj: configmap "kube-root-ca.crt" not found Jan 24 00:50:19.002299 kubelet[2448]: E0124 00:50:19.002278 2448 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63137e90-9d72-4ba4-80d4-33d443f3972d-kube-api-access-8sqbq podName:63137e90-9d72-4ba4-80d4-33d443f3972d nodeName:}" failed. No retries permitted until 2026-01-24 00:50:19.502261006 +0000 UTC m=+5.685110076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sqbq" (UniqueName: "kubernetes.io/projected/63137e90-9d72-4ba4-80d4-33d443f3972d-kube-api-access-8sqbq") pod "kube-flannel-ds-mr2xj" (UID: "63137e90-9d72-4ba4-80d4-33d443f3972d") : configmap "kube-root-ca.crt" not found Jan 24 00:50:19.002791 kubelet[2448]: E0124 00:50:19.002767 2448 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 24 00:50:19.002791 kubelet[2448]: E0124 00:50:19.002785 2448 projected.go:194] Error preparing data for projected volume kube-api-access-nh52w for pod kube-system/kube-proxy-bzcx6: configmap "kube-root-ca.crt" not found Jan 24 00:50:19.002942 kubelet[2448]: E0124 00:50:19.002814 2448 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-kube-api-access-nh52w podName:b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4 nodeName:}" failed. No retries permitted until 2026-01-24 00:50:19.502802158 +0000 UTC m=+5.685651227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nh52w" (UniqueName: "kubernetes.io/projected/b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4-kube-api-access-nh52w") pod "kube-proxy-bzcx6" (UID: "b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4") : configmap "kube-root-ca.crt" not found Jan 24 00:50:19.749460 kubelet[2448]: E0124 00:50:19.749338 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:19.750716 containerd[1457]: time="2026-01-24T00:50:19.750672706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzcx6,Uid:b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:19.755187 kubelet[2448]: E0124 00:50:19.754992 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:19.756522 containerd[1457]: time="2026-01-24T00:50:19.756352000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mr2xj,Uid:63137e90-9d72-4ba4-80d4-33d443f3972d,Namespace:kube-flannel,Attempt:0,}" Jan 24 00:50:19.792609 containerd[1457]: time="2026-01-24T00:50:19.792216807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:50:19.792609 containerd[1457]: time="2026-01-24T00:50:19.792315961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:50:19.792609 containerd[1457]: time="2026-01-24T00:50:19.792338951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:19.795454 containerd[1457]: time="2026-01-24T00:50:19.794932362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:19.801630 containerd[1457]: time="2026-01-24T00:50:19.798597000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:50:19.801630 containerd[1457]: time="2026-01-24T00:50:19.798689282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:50:19.801630 containerd[1457]: time="2026-01-24T00:50:19.798700441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:19.801630 containerd[1457]: time="2026-01-24T00:50:19.798813269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:50:19.820652 systemd[1]: Started cri-containerd-b8850b785ccfe0614c0ddc21e9e84fafd07b1ddd7e748f8fa09fcb55d5d593ca.scope - libcontainer container b8850b785ccfe0614c0ddc21e9e84fafd07b1ddd7e748f8fa09fcb55d5d593ca. Jan 24 00:50:19.830061 systemd[1]: Started cri-containerd-5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021.scope - libcontainer container 5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021. 
Jan 24 00:50:19.871028 containerd[1457]: time="2026-01-24T00:50:19.870951667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzcx6,Uid:b3ccd6a7-c67f-4010-adaf-0ee09b5cb5e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8850b785ccfe0614c0ddc21e9e84fafd07b1ddd7e748f8fa09fcb55d5d593ca\"" Jan 24 00:50:19.873723 kubelet[2448]: E0124 00:50:19.873659 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:19.876802 containerd[1457]: time="2026-01-24T00:50:19.876775794Z" level=info msg="CreateContainer within sandbox \"b8850b785ccfe0614c0ddc21e9e84fafd07b1ddd7e748f8fa09fcb55d5d593ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:50:19.888499 containerd[1457]: time="2026-01-24T00:50:19.888306995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mr2xj,Uid:63137e90-9d72-4ba4-80d4-33d443f3972d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\"" Jan 24 00:50:19.889444 kubelet[2448]: E0124 00:50:19.889273 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:19.890222 containerd[1457]: time="2026-01-24T00:50:19.890202203Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 24 00:50:19.900807 containerd[1457]: time="2026-01-24T00:50:19.900761581Z" level=info msg="CreateContainer within sandbox \"b8850b785ccfe0614c0ddc21e9e84fafd07b1ddd7e748f8fa09fcb55d5d593ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3322fc8220c816f262cbf5921d815928adf3a5a476f7b23395ca44ca2a93d5ed\"" Jan 24 00:50:19.904528 containerd[1457]: time="2026-01-24T00:50:19.902861168Z" level=info msg="StartContainer for 
\"3322fc8220c816f262cbf5921d815928adf3a5a476f7b23395ca44ca2a93d5ed\"" Jan 24 00:50:19.952083 systemd[1]: Started cri-containerd-3322fc8220c816f262cbf5921d815928adf3a5a476f7b23395ca44ca2a93d5ed.scope - libcontainer container 3322fc8220c816f262cbf5921d815928adf3a5a476f7b23395ca44ca2a93d5ed. Jan 24 00:50:19.986569 containerd[1457]: time="2026-01-24T00:50:19.986373602Z" level=info msg="StartContainer for \"3322fc8220c816f262cbf5921d815928adf3a5a476f7b23395ca44ca2a93d5ed\" returns successfully" Jan 24 00:50:20.028743 kubelet[2448]: E0124 00:50:20.028494 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:20.040012 kubelet[2448]: I0124 00:50:20.039846 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzcx6" podStartSLOduration=2.039831295 podStartE2EDuration="2.039831295s" podCreationTimestamp="2026-01-24 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:20.039625565 +0000 UTC m=+6.222474645" watchObservedRunningTime="2026-01-24 00:50:20.039831295 +0000 UTC m=+6.222680363" Jan 24 00:50:21.122626 kubelet[2448]: E0124 00:50:21.122558 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:21.196059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819771397.mount: Deactivated successfully. 
Jan 24 00:50:21.249497 containerd[1457]: time="2026-01-24T00:50:21.249281225Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:21.251314 containerd[1457]: time="2026-01-24T00:50:21.251210598Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 24 00:50:21.252890 containerd[1457]: time="2026-01-24T00:50:21.252794579Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:21.255500 containerd[1457]: time="2026-01-24T00:50:21.255293161Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:21.258522 containerd[1457]: time="2026-01-24T00:50:21.257018751Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.366037551s" Jan 24 00:50:21.258522 containerd[1457]: time="2026-01-24T00:50:21.257059645Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 24 00:50:21.262793 containerd[1457]: time="2026-01-24T00:50:21.262705496Z" level=info msg="CreateContainer within sandbox \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 24 00:50:21.281992 containerd[1457]: 
time="2026-01-24T00:50:21.281821779Z" level=info msg="CreateContainer within sandbox \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f\"" Jan 24 00:50:21.282917 containerd[1457]: time="2026-01-24T00:50:21.282750234Z" level=info msg="StartContainer for \"38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f\"" Jan 24 00:50:21.324722 systemd[1]: Started cri-containerd-38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f.scope - libcontainer container 38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f. Jan 24 00:50:21.357873 systemd[1]: cri-containerd-38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f.scope: Deactivated successfully. Jan 24 00:50:21.359832 containerd[1457]: time="2026-01-24T00:50:21.359685037Z" level=info msg="StartContainer for \"38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f\" returns successfully" Jan 24 00:50:21.423504 containerd[1457]: time="2026-01-24T00:50:21.420287933Z" level=info msg="shim disconnected" id=38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f namespace=k8s.io Jan 24 00:50:21.423504 containerd[1457]: time="2026-01-24T00:50:21.423499734Z" level=warning msg="cleaning up after shim disconnected" id=38c5125d55d041cc9b607dea7362f7d71d10063a9a808acf0a1cfe366475d75f namespace=k8s.io Jan 24 00:50:21.423741 containerd[1457]: time="2026-01-24T00:50:21.423519639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:50:22.035893 kubelet[2448]: E0124 00:50:22.035706 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:22.035893 kubelet[2448]: E0124 00:50:22.035722 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:22.036955 containerd[1457]: time="2026-01-24T00:50:22.036900250Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 24 00:50:22.789887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235849993.mount: Deactivated successfully. Jan 24 00:50:23.038381 kubelet[2448]: E0124 00:50:23.038249 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:23.547353 containerd[1457]: time="2026-01-24T00:50:23.547110804Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:23.548692 containerd[1457]: time="2026-01-24T00:50:23.548610433Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 24 00:50:23.550007 containerd[1457]: time="2026-01-24T00:50:23.549930620Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:23.554232 containerd[1457]: time="2026-01-24T00:50:23.554144494Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:23.556165 containerd[1457]: time="2026-01-24T00:50:23.555996229Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 1.519055897s" Jan 24 00:50:23.556165 
containerd[1457]: time="2026-01-24T00:50:23.556154125Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 24 00:50:23.560432 containerd[1457]: time="2026-01-24T00:50:23.560341801Z" level=info msg="CreateContainer within sandbox \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:50:23.577537 containerd[1457]: time="2026-01-24T00:50:23.577368926Z" level=info msg="CreateContainer within sandbox \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0\"" Jan 24 00:50:23.578358 containerd[1457]: time="2026-01-24T00:50:23.578161518Z" level=info msg="StartContainer for \"98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0\"" Jan 24 00:50:23.621791 systemd[1]: Started cri-containerd-98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0.scope - libcontainer container 98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0. Jan 24 00:50:23.661943 systemd[1]: cri-containerd-98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0.scope: Deactivated successfully. 
Jan 24 00:50:23.666017 containerd[1457]: time="2026-01-24T00:50:23.663750092Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63137e90_9d72_4ba4_80d4_33d443f3972d.slice/cri-containerd-98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0.scope/memory.events\": no such file or directory" Jan 24 00:50:23.671007 containerd[1457]: time="2026-01-24T00:50:23.670823255Z" level=info msg="StartContainer for \"98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0\" returns successfully" Jan 24 00:50:23.702651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0-rootfs.mount: Deactivated successfully. Jan 24 00:50:23.742955 containerd[1457]: time="2026-01-24T00:50:23.742865642Z" level=info msg="shim disconnected" id=98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0 namespace=k8s.io Jan 24 00:50:23.742955 containerd[1457]: time="2026-01-24T00:50:23.742934006Z" level=warning msg="cleaning up after shim disconnected" id=98ad3a288cd6961f50de130cce96395e218d64bf1bd32195837aa93947fd83c0 namespace=k8s.io Jan 24 00:50:23.742955 containerd[1457]: time="2026-01-24T00:50:23.742943363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:50:23.767737 kubelet[2448]: I0124 00:50:23.767702 2448 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:50:23.807924 systemd[1]: Created slice kubepods-burstable-pod36347f68_65a6_43dc_ad00_76c0e9e937f5.slice - libcontainer container kubepods-burstable-pod36347f68_65a6_43dc_ad00_76c0e9e937f5.slice. Jan 24 00:50:23.815152 systemd[1]: Created slice kubepods-burstable-pod9b3267e7_a914_4e13_8d9d_6be365114ede.slice - libcontainer container kubepods-burstable-pod9b3267e7_a914_4e13_8d9d_6be365114ede.slice. 
Jan 24 00:50:23.831204 kubelet[2448]: I0124 00:50:23.831010 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b3267e7-a914-4e13-8d9d-6be365114ede-config-volume\") pod \"coredns-668d6bf9bc-krxhx\" (UID: \"9b3267e7-a914-4e13-8d9d-6be365114ede\") " pod="kube-system/coredns-668d6bf9bc-krxhx" Jan 24 00:50:23.831204 kubelet[2448]: I0124 00:50:23.831193 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2g8\" (UniqueName: \"kubernetes.io/projected/36347f68-65a6-43dc-ad00-76c0e9e937f5-kube-api-access-wc2g8\") pod \"coredns-668d6bf9bc-nj9sz\" (UID: \"36347f68-65a6-43dc-ad00-76c0e9e937f5\") " pod="kube-system/coredns-668d6bf9bc-nj9sz" Jan 24 00:50:23.831481 kubelet[2448]: I0124 00:50:23.831228 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffcm\" (UniqueName: \"kubernetes.io/projected/9b3267e7-a914-4e13-8d9d-6be365114ede-kube-api-access-6ffcm\") pod \"coredns-668d6bf9bc-krxhx\" (UID: \"9b3267e7-a914-4e13-8d9d-6be365114ede\") " pod="kube-system/coredns-668d6bf9bc-krxhx" Jan 24 00:50:23.831481 kubelet[2448]: I0124 00:50:23.831309 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36347f68-65a6-43dc-ad00-76c0e9e937f5-config-volume\") pod \"coredns-668d6bf9bc-nj9sz\" (UID: \"36347f68-65a6-43dc-ad00-76c0e9e937f5\") " pod="kube-system/coredns-668d6bf9bc-nj9sz" Jan 24 00:50:24.044743 kubelet[2448]: E0124 00:50:24.044650 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:24.047192 containerd[1457]: time="2026-01-24T00:50:24.047121320Z" level=info msg="CreateContainer within sandbox 
\"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 24 00:50:24.070818 containerd[1457]: time="2026-01-24T00:50:24.070532372Z" level=info msg="CreateContainer within sandbox \"5b0bf0188e492ddda9f92a5cabf111b16ae140da9ffdd71202a83f07532ec021\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"52c7fbfb97aa16b2be0159e7e7164343f3225795175b769545013c37865d0d93\"" Jan 24 00:50:24.071517 containerd[1457]: time="2026-01-24T00:50:24.071357113Z" level=info msg="StartContainer for \"52c7fbfb97aa16b2be0159e7e7164343f3225795175b769545013c37865d0d93\"" Jan 24 00:50:24.110688 systemd[1]: Started cri-containerd-52c7fbfb97aa16b2be0159e7e7164343f3225795175b769545013c37865d0d93.scope - libcontainer container 52c7fbfb97aa16b2be0159e7e7164343f3225795175b769545013c37865d0d93. Jan 24 00:50:24.112503 kubelet[2448]: E0124 00:50:24.112364 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:24.113922 containerd[1457]: time="2026-01-24T00:50:24.113800739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nj9sz,Uid:36347f68-65a6-43dc-ad00-76c0e9e937f5,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:24.121155 kubelet[2448]: E0124 00:50:24.121075 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:24.122691 containerd[1457]: time="2026-01-24T00:50:24.122617983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krxhx,Uid:9b3267e7-a914-4e13-8d9d-6be365114ede,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:24.184799 kubelet[2448]: E0124 00:50:24.181668 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:24.205850 containerd[1457]: time="2026-01-24T00:50:24.205796339Z" level=info msg="StartContainer for \"52c7fbfb97aa16b2be0159e7e7164343f3225795175b769545013c37865d0d93\" returns successfully" Jan 24 00:50:24.217905 kubelet[2448]: E0124 00:50:24.217834 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:24.237154 containerd[1457]: time="2026-01-24T00:50:24.237109828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nj9sz,Uid:36347f68-65a6-43dc-ad00-76c0e9e937f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d03dbb4806bc65ca731dbe54019ad7458021f6efe2246032f357a0f5708e54f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:50:24.237665 kubelet[2448]: E0124 00:50:24.237636 2448 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d03dbb4806bc65ca731dbe54019ad7458021f6efe2246032f357a0f5708e54f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:50:24.237786 kubelet[2448]: E0124 00:50:24.237770 2448 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d03dbb4806bc65ca731dbe54019ad7458021f6efe2246032f357a0f5708e54f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nj9sz" Jan 24 00:50:24.237852 kubelet[2448]: E0124 00:50:24.237839 2448 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"d03dbb4806bc65ca731dbe54019ad7458021f6efe2246032f357a0f5708e54f5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nj9sz" Jan 24 00:50:24.237941 kubelet[2448]: E0124 00:50:24.237920 2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nj9sz_kube-system(36347f68-65a6-43dc-ad00-76c0e9e937f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nj9sz_kube-system(36347f68-65a6-43dc-ad00-76c0e9e937f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d03dbb4806bc65ca731dbe54019ad7458021f6efe2246032f357a0f5708e54f5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-nj9sz" podUID="36347f68-65a6-43dc-ad00-76c0e9e937f5" Jan 24 00:50:24.242695 containerd[1457]: time="2026-01-24T00:50:24.242525184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krxhx,Uid:9b3267e7-a914-4e13-8d9d-6be365114ede,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c05d7f22b9566de3484ce0adac40a0cb6fad86bb89341ba45ccdec05a721066\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:50:24.243145 kubelet[2448]: E0124 00:50:24.243122 2448 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d7f22b9566de3484ce0adac40a0cb6fad86bb89341ba45ccdec05a721066\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 24 00:50:24.243304 kubelet[2448]: E0124 00:50:24.243285 2448 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d7f22b9566de3484ce0adac40a0cb6fad86bb89341ba45ccdec05a721066\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-krxhx" Jan 24 00:50:24.243514 kubelet[2448]: E0124 00:50:24.243356 2448 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c05d7f22b9566de3484ce0adac40a0cb6fad86bb89341ba45ccdec05a721066\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-krxhx" Jan 24 00:50:24.243514 kubelet[2448]: E0124 00:50:24.243499 2448 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-krxhx_kube-system(9b3267e7-a914-4e13-8d9d-6be365114ede)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-krxhx_kube-system(9b3267e7-a914-4e13-8d9d-6be365114ede)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c05d7f22b9566de3484ce0adac40a0cb6fad86bb89341ba45ccdec05a721066\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-krxhx" podUID="9b3267e7-a914-4e13-8d9d-6be365114ede" Jan 24 00:50:25.049598 kubelet[2448]: E0124 00:50:25.049254 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:25.049598 kubelet[2448]: E0124 00:50:25.049367 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
00:50:25.050165 kubelet[2448]: E0124 00:50:25.049611 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:25.075277 kubelet[2448]: I0124 00:50:25.075104 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mr2xj" podStartSLOduration=3.407524281 podStartE2EDuration="7.07508329s" podCreationTimestamp="2026-01-24 00:50:18 +0000 UTC" firstStartedPulling="2026-01-24 00:50:19.889786355 +0000 UTC m=+6.072635425" lastFinishedPulling="2026-01-24 00:50:23.557345364 +0000 UTC m=+9.740194434" observedRunningTime="2026-01-24 00:50:25.074653774 +0000 UTC m=+11.257502853" watchObservedRunningTime="2026-01-24 00:50:25.07508329 +0000 UTC m=+11.257932359" Jan 24 00:50:25.261097 systemd-networkd[1384]: flannel.1: Link UP Jan 24 00:50:25.261105 systemd-networkd[1384]: flannel.1: Gained carrier Jan 24 00:50:26.053526 kubelet[2448]: E0124 00:50:26.053350 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:26.054683 kubelet[2448]: E0124 00:50:26.054607 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:50:26.657673 systemd-networkd[1384]: flannel.1: Gained IPv6LL Jan 24 00:50:32.245682 update_engine[1442]: I20260124 00:50:32.245541 1442 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:50:32.278483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3130)
Jan 24 00:50:32.317580 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3128)
Jan 24 00:50:35.995551 kubelet[2448]: E0124 00:50:35.995484 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:35.996327 containerd[1457]: time="2026-01-24T00:50:35.996044719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krxhx,Uid:9b3267e7-a914-4e13-8d9d-6be365114ede,Namespace:kube-system,Attempt:0,}"
Jan 24 00:50:36.033138 systemd-networkd[1384]: cni0: Link UP
Jan 24 00:50:36.033179 systemd-networkd[1384]: cni0: Gained carrier
Jan 24 00:50:36.039799 systemd-networkd[1384]: cni0: Lost carrier
Jan 24 00:50:36.045207 systemd-networkd[1384]: vethfdea4f58: Link UP
Jan 24 00:50:36.051716 kernel: cni0: port 1(vethfdea4f58) entered blocking state
Jan 24 00:50:36.051876 kernel: cni0: port 1(vethfdea4f58) entered disabled state
Jan 24 00:50:36.054786 kernel: vethfdea4f58: entered allmulticast mode
Jan 24 00:50:36.063658 kernel: vethfdea4f58: entered promiscuous mode
Jan 24 00:50:36.063710 kernel: cni0: port 1(vethfdea4f58) entered blocking state
Jan 24 00:50:36.063728 kernel: cni0: port 1(vethfdea4f58) entered forwarding state
Jan 24 00:50:36.067533 kernel: cni0: port 1(vethfdea4f58) entered disabled state
Jan 24 00:50:36.084645 kernel: cni0: port 1(vethfdea4f58) entered blocking state
Jan 24 00:50:36.084696 kernel: cni0: port 1(vethfdea4f58) entered forwarding state
Jan 24 00:50:36.084546 systemd-networkd[1384]: vethfdea4f58: Gained carrier
Jan 24 00:50:36.085582 systemd-networkd[1384]: cni0: Gained carrier
Jan 24 00:50:36.089304 containerd[1457]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"}
Jan 24 00:50:36.089304 containerd[1457]: delegateAdd: netconf sent to delegate plugin:
Jan 24 00:50:36.118011 containerd[1457]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-24T00:50:36.117871224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:50:36.118011 containerd[1457]: time="2026-01-24T00:50:36.117929109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:50:36.118011 containerd[1457]: time="2026-01-24T00:50:36.117942284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:50:36.118153 containerd[1457]: time="2026-01-24T00:50:36.118005841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:50:36.146571 systemd[1]: Started cri-containerd-a3b30b34a19460f716f7d6b2880844713969a5f2b704e7af459b2450ffb05c4b.scope - libcontainer container a3b30b34a19460f716f7d6b2880844713969a5f2b704e7af459b2450ffb05c4b.
Jan 24 00:50:36.163035 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 24 00:50:36.194104 containerd[1457]: time="2026-01-24T00:50:36.194035245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krxhx,Uid:9b3267e7-a914-4e13-8d9d-6be365114ede,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3b30b34a19460f716f7d6b2880844713969a5f2b704e7af459b2450ffb05c4b\""
Jan 24 00:50:36.195068 kubelet[2448]: E0124 00:50:36.194980 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:36.198110 containerd[1457]: time="2026-01-24T00:50:36.197961425Z" level=info msg="CreateContainer within sandbox \"a3b30b34a19460f716f7d6b2880844713969a5f2b704e7af459b2450ffb05c4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 24 00:50:36.213912 containerd[1457]: time="2026-01-24T00:50:36.213779840Z" level=info msg="CreateContainer within sandbox \"a3b30b34a19460f716f7d6b2880844713969a5f2b704e7af459b2450ffb05c4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f367335fd8e268a101790888ccfbe30b8563b3f215de22265d401f87b3ef89dd\""
Jan 24 00:50:36.214571 containerd[1457]: time="2026-01-24T00:50:36.214524069Z" level=info msg="StartContainer for \"f367335fd8e268a101790888ccfbe30b8563b3f215de22265d401f87b3ef89dd\""
Jan 24 00:50:36.249570 systemd[1]: Started cri-containerd-f367335fd8e268a101790888ccfbe30b8563b3f215de22265d401f87b3ef89dd.scope - libcontainer container f367335fd8e268a101790888ccfbe30b8563b3f215de22265d401f87b3ef89dd.
Jan 24 00:50:36.281810 containerd[1457]: time="2026-01-24T00:50:36.281747207Z" level=info msg="StartContainer for \"f367335fd8e268a101790888ccfbe30b8563b3f215de22265d401f87b3ef89dd\" returns successfully"
Jan 24 00:50:36.995643 kubelet[2448]: E0124 00:50:36.995558 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:36.996210 containerd[1457]: time="2026-01-24T00:50:36.996065263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nj9sz,Uid:36347f68-65a6-43dc-ad00-76c0e9e937f5,Namespace:kube-system,Attempt:0,}"
Jan 24 00:50:37.022869 systemd-networkd[1384]: vethc4855a77: Link UP
Jan 24 00:50:37.026904 kernel: cni0: port 2(vethc4855a77) entered blocking state
Jan 24 00:50:37.026967 kernel: cni0: port 2(vethc4855a77) entered disabled state
Jan 24 00:50:37.028726 kernel: vethc4855a77: entered allmulticast mode
Jan 24 00:50:37.030559 kernel: vethc4855a77: entered promiscuous mode
Jan 24 00:50:37.030604 kernel: cni0: port 2(vethc4855a77) entered blocking state
Jan 24 00:50:37.034304 kernel: cni0: port 2(vethc4855a77) entered forwarding state
Jan 24 00:50:37.043475 kernel: cni0: port 2(vethc4855a77) entered disabled state
Jan 24 00:50:37.049497 kernel: cni0: port 2(vethc4855a77) entered blocking state
Jan 24 00:50:37.049553 kernel: cni0: port 2(vethc4855a77) entered forwarding state
Jan 24 00:50:37.049722 systemd-networkd[1384]: vethc4855a77: Gained carrier
Jan 24 00:50:37.052791 containerd[1457]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022938), "name":"cbr0", "type":"bridge"}
Jan 24 00:50:37.052791 containerd[1457]: delegateAdd: netconf sent to delegate plugin:
Jan 24 00:50:37.081751 kubelet[2448]: E0124 00:50:37.081661 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:37.104373 kubelet[2448]: I0124 00:50:37.104252 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-krxhx" podStartSLOduration=18.104231583 podStartE2EDuration="18.104231583s" podCreationTimestamp="2026-01-24 00:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:37.090885903 +0000 UTC m=+23.273734982" watchObservedRunningTime="2026-01-24 00:50:37.104231583 +0000 UTC m=+23.287080662"
Jan 24 00:50:37.112273 containerd[1457]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-24T00:50:37.110439611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:50:37.112273 containerd[1457]: time="2026-01-24T00:50:37.110514679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:50:37.112273 containerd[1457]: time="2026-01-24T00:50:37.110529205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:50:37.112273 containerd[1457]: time="2026-01-24T00:50:37.110677437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:50:37.146707 systemd[1]: Started cri-containerd-65622d879171e8f7378dda70fbd0dc16b1c9de54abf841b11378fa2072fe89ce.scope - libcontainer container 65622d879171e8f7378dda70fbd0dc16b1c9de54abf841b11378fa2072fe89ce.
Jan 24 00:50:37.167144 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 24 00:50:37.203718 containerd[1457]: time="2026-01-24T00:50:37.203623320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nj9sz,Uid:36347f68-65a6-43dc-ad00-76c0e9e937f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"65622d879171e8f7378dda70fbd0dc16b1c9de54abf841b11378fa2072fe89ce\""
Jan 24 00:50:37.204777 kubelet[2448]: E0124 00:50:37.204739 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:37.207712 containerd[1457]: time="2026-01-24T00:50:37.207622854Z" level=info msg="CreateContainer within sandbox \"65622d879171e8f7378dda70fbd0dc16b1c9de54abf841b11378fa2072fe89ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 24 00:50:37.229287 containerd[1457]: time="2026-01-24T00:50:37.229197729Z" level=info msg="CreateContainer within sandbox \"65622d879171e8f7378dda70fbd0dc16b1c9de54abf841b11378fa2072fe89ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0af5f66eafd91dac5f9a251efa4a766d5922d96b1986ee26a815e7e266655c5\""
Jan 24 00:50:37.229961 containerd[1457]: time="2026-01-24T00:50:37.229919094Z" level=info msg="StartContainer for \"f0af5f66eafd91dac5f9a251efa4a766d5922d96b1986ee26a815e7e266655c5\""
Jan 24 00:50:37.271681 systemd[1]: Started cri-containerd-f0af5f66eafd91dac5f9a251efa4a766d5922d96b1986ee26a815e7e266655c5.scope - libcontainer container f0af5f66eafd91dac5f9a251efa4a766d5922d96b1986ee26a815e7e266655c5.
Jan 24 00:50:37.308371 containerd[1457]: time="2026-01-24T00:50:37.308264136Z" level=info msg="StartContainer for \"f0af5f66eafd91dac5f9a251efa4a766d5922d96b1986ee26a815e7e266655c5\" returns successfully"
Jan 24 00:50:37.473721 systemd-networkd[1384]: cni0: Gained IPv6LL
Jan 24 00:50:37.474279 systemd-networkd[1384]: vethfdea4f58: Gained IPv6LL
Jan 24 00:50:38.080729 kubelet[2448]: E0124 00:50:38.080675 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:38.081178 kubelet[2448]: E0124 00:50:38.080736 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:38.108962 kubelet[2448]: I0124 00:50:38.108715 2448 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nj9sz" podStartSLOduration=19.108695327 podStartE2EDuration="19.108695327s" podCreationTimestamp="2026-01-24 00:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:38.093318071 +0000 UTC m=+24.276167140" watchObservedRunningTime="2026-01-24 00:50:38.108695327 +0000 UTC m=+24.291544396"
Jan 24 00:50:38.818700 systemd-networkd[1384]: vethc4855a77: Gained IPv6LL
Jan 24 00:50:39.082755 kubelet[2448]: E0124 00:50:39.082618 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:40.085122 kubelet[2448]: E0124 00:50:40.085048 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:50:41.905454 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:33600.service - OpenSSH per-connection server daemon (10.0.0.1:33600).
Jan 24 00:50:41.951299 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 33600 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:41.953585 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:41.959203 systemd-logind[1441]: New session 6 of user core.
Jan 24 00:50:41.967663 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 00:50:42.099559 sshd[3436]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:42.104612 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:33600.service: Deactivated successfully.
Jan 24 00:50:42.107313 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 00:50:42.108517 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Jan 24 00:50:42.110289 systemd-logind[1441]: Removed session 6.
Jan 24 00:50:47.113827 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292).
Jan 24 00:50:47.182839 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:47.185187 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:47.191763 systemd-logind[1441]: New session 7 of user core.
Jan 24 00:50:47.204861 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 00:50:47.339014 sshd[3474]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:47.345275 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:37292.service: Deactivated successfully.
Jan 24 00:50:47.347946 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:50:47.349010 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:50:47.350704 systemd-logind[1441]: Removed session 7.
Jan 24 00:50:52.353034 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:37300.service - OpenSSH per-connection server daemon (10.0.0.1:37300).
Jan 24 00:50:52.398532 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 37300 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:52.400712 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:52.406312 systemd-logind[1441]: New session 8 of user core.
Jan 24 00:50:52.417610 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 00:50:52.540577 sshd[3512]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:52.551847 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:37300.service: Deactivated successfully.
Jan 24 00:50:52.554600 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 00:50:52.557461 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit.
Jan 24 00:50:52.567814 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:37306.service - OpenSSH per-connection server daemon (10.0.0.1:37306).
Jan 24 00:50:52.569183 systemd-logind[1441]: Removed session 8.
Jan 24 00:50:52.606112 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 37306 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:52.608138 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:52.614274 systemd-logind[1441]: New session 9 of user core.
Jan 24 00:50:52.624664 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 00:50:52.802265 sshd[3528]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:52.812773 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:37306.service: Deactivated successfully.
Jan 24 00:50:52.814565 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 00:50:52.818308 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Jan 24 00:50:52.827052 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:37308.service - OpenSSH per-connection server daemon (10.0.0.1:37308).
Jan 24 00:50:52.828353 systemd-logind[1441]: Removed session 9.
Jan 24 00:50:52.866245 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 37308 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:52.868199 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:52.873489 systemd-logind[1441]: New session 10 of user core.
Jan 24 00:50:52.880636 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 00:50:53.012006 sshd[3540]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:53.016314 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:37308.service: Deactivated successfully.
Jan 24 00:50:53.018687 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 00:50:53.019743 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Jan 24 00:50:53.021083 systemd-logind[1441]: Removed session 10.
Jan 24 00:50:58.024567 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676).
Jan 24 00:50:58.064321 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:58.065857 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:58.070358 systemd-logind[1441]: New session 11 of user core.
Jan 24 00:50:58.080572 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 00:50:58.184520 sshd[3576]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:58.195051 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:55676.service: Deactivated successfully.
Jan 24 00:50:58.196782 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 00:50:58.198236 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Jan 24 00:50:58.199550 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:55686.service - OpenSSH per-connection server daemon (10.0.0.1:55686).
Jan 24 00:50:58.200356 systemd-logind[1441]: Removed session 11.
Jan 24 00:50:58.239030 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 55686 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:58.240744 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:58.245024 systemd-logind[1441]: New session 12 of user core.
Jan 24 00:50:58.251558 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 00:50:58.454666 sshd[3590]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:58.471170 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:55686.service: Deactivated successfully.
Jan 24 00:50:58.473210 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 00:50:58.475354 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Jan 24 00:50:58.484718 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:55698.service - OpenSSH per-connection server daemon (10.0.0.1:55698).
Jan 24 00:50:58.486001 systemd-logind[1441]: Removed session 12.
Jan 24 00:50:58.523360 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 55698 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:58.525070 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:58.530227 systemd-logind[1441]: New session 13 of user core.
Jan 24 00:50:58.541736 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 24 00:50:59.068855 sshd[3603]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:59.079617 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:55698.service: Deactivated successfully.
Jan 24 00:50:59.082757 systemd[1]: session-13.scope: Deactivated successfully.
Jan 24 00:50:59.085132 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Jan 24 00:50:59.101969 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:55704.service - OpenSSH per-connection server daemon (10.0.0.1:55704).
Jan 24 00:50:59.103452 systemd-logind[1441]: Removed session 13.
Jan 24 00:50:59.138286 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 55704 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:59.140155 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:59.145284 systemd-logind[1441]: New session 14 of user core.
Jan 24 00:50:59.159552 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 24 00:50:59.371687 sshd[3623]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:59.380993 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:55704.service: Deactivated successfully.
Jan 24 00:50:59.384154 systemd[1]: session-14.scope: Deactivated successfully.
Jan 24 00:50:59.386094 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Jan 24 00:50:59.402005 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:55712.service - OpenSSH per-connection server daemon (10.0.0.1:55712).
Jan 24 00:50:59.403733 systemd-logind[1441]: Removed session 14.
Jan 24 00:50:59.438304 sshd[3635]: Accepted publickey for core from 10.0.0.1 port 55712 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:50:59.440493 sshd[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:50:59.447202 systemd-logind[1441]: New session 15 of user core.
Jan 24 00:50:59.452718 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 24 00:50:59.566967 sshd[3635]: pam_unix(sshd:session): session closed for user core
Jan 24 00:50:59.570691 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:55712.service: Deactivated successfully.
Jan 24 00:50:59.573740 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 00:50:59.576311 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Jan 24 00:50:59.578262 systemd-logind[1441]: Removed session 15.
Jan 24 00:51:04.599769 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760).
Jan 24 00:51:04.637506 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:51:04.638896 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:04.644481 systemd-logind[1441]: New session 16 of user core.
Jan 24 00:51:04.651639 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:51:04.768529 sshd[3671]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:04.772729 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:52760.service: Deactivated successfully.
Jan 24 00:51:04.774886 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:51:04.775779 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:51:04.777148 systemd-logind[1441]: Removed session 16.
Jan 24 00:51:09.782139 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:52768.service - OpenSSH per-connection server daemon (10.0.0.1:52768).
Jan 24 00:51:09.827606 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 52768 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:51:09.829667 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:09.835823 systemd-logind[1441]: New session 17 of user core.
Jan 24 00:51:09.850719 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:51:09.967148 sshd[3708]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:09.971510 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:52768.service: Deactivated successfully.
Jan 24 00:51:09.973813 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:51:09.974922 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:51:09.976361 systemd-logind[1441]: Removed session 17.
Jan 24 00:51:14.981945 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:52662.service - OpenSSH per-connection server daemon (10.0.0.1:52662).
Jan 24 00:51:15.029958 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 52662 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:51:15.032272 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:15.037474 systemd-logind[1441]: New session 18 of user core.
Jan 24 00:51:15.047819 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:51:15.156508 sshd[3746]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:15.160291 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:52662.service: Deactivated successfully.
Jan 24 00:51:15.162193 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:51:15.163211 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:51:15.164638 systemd-logind[1441]: Removed session 18.
Jan 24 00:51:20.168880 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:52666.service - OpenSSH per-connection server daemon (10.0.0.1:52666).
Jan 24 00:51:20.210260 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 52666 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:51:20.212335 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:20.218316 systemd-logind[1441]: New session 19 of user core.
Jan 24 00:51:20.227636 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:51:20.342664 sshd[3781]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:20.346883 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:52666.service: Deactivated successfully.
Jan 24 00:51:20.348803 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:51:20.349616 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:51:20.351029 systemd-logind[1441]: Removed session 19.