Jan 20 06:38:53.997969 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026 Jan 20 06:38:53.998010 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:38:53.998025 kernel: BIOS-provided physical RAM map: Jan 20 06:38:53.998033 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 06:38:53.998040 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 06:38:53.998047 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 06:38:53.998056 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 20 06:38:53.998067 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 20 06:38:53.998075 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 06:38:53.998082 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 06:38:53.998090 kernel: NX (Execute Disable) protection: active Jan 20 06:38:53.998100 kernel: APIC: Static calls initialized Jan 20 06:38:53.998107 kernel: SMBIOS 2.8 present. Jan 20 06:38:53.998115 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 20 06:38:53.998124 kernel: DMI: Memory slots populated: 1/1 Jan 20 06:38:53.998133 kernel: Hypervisor detected: KVM Jan 20 06:38:53.998146 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 20 06:38:53.998155 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 06:38:53.998164 kernel: kvm-clock: using sched offset of 3992434482 cycles Jan 20 06:38:53.998174 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 06:38:53.998183 kernel: tsc: Detected 2494.138 MHz processor Jan 20 06:38:53.998192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 06:38:53.998201 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 06:38:53.998213 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 20 06:38:53.998222 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 06:38:53.998231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 06:38:53.998240 kernel: ACPI: Early table checksum verification disabled Jan 20 06:38:53.998248 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 20 06:38:53.998257 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998266 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998275 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998286 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 20 06:38:53.998295 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998304 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998313 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 
06:38:53.998322 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 06:38:53.998330 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Jan 20 06:38:53.998339 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Jan 20 06:38:53.998350 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 20 06:38:53.998359 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Jan 20 06:38:53.998372 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Jan 20 06:38:53.998381 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Jan 20 06:38:53.998390 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Jan 20 06:38:53.998401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 20 06:38:53.998411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 20 06:38:53.998420 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jan 20 06:38:53.998445 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jan 20 06:38:53.998454 kernel: Zone ranges: Jan 20 06:38:53.998463 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 06:38:53.998475 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 20 06:38:53.998484 kernel: Normal empty Jan 20 06:38:53.998493 kernel: Device empty Jan 20 06:38:53.998502 kernel: Movable zone start for each node Jan 20 06:38:53.998511 kernel: Early memory node ranges Jan 20 06:38:53.998520 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 06:38:53.998529 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 20 06:38:53.998539 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 20 06:38:53.998552 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 06:38:53.998561 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 06:38:53.998571 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 20 06:38:53.998583 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 06:38:53.998592 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 06:38:53.998603 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 06:38:53.998612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 06:38:53.998624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 06:38:53.998633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 06:38:53.998644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 06:38:53.998653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 06:38:53.998663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 06:38:53.998672 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 06:38:53.998681 kernel: TSC deadline timer available Jan 20 06:38:53.998692 kernel: CPU topo: Max. logical packages: 1 Jan 20 06:38:53.998701 kernel: CPU topo: Max. logical dies: 1 Jan 20 06:38:53.998711 kernel: CPU topo: Max. dies per package: 1 Jan 20 06:38:53.998720 kernel: CPU topo: Max. threads per core: 1 Jan 20 06:38:53.998728 kernel: CPU topo: Num. cores per package: 2 Jan 20 06:38:53.998738 kernel: CPU topo: Num. 
threads per package: 2 Jan 20 06:38:53.998746 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 20 06:38:53.998755 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 06:38:53.998767 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 20 06:38:53.998776 kernel: Booting paravirtualized kernel on KVM Jan 20 06:38:53.998786 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 06:38:53.998795 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 20 06:38:53.998816 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 20 06:38:53.998825 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 20 06:38:53.998834 kernel: pcpu-alloc: [0] 0 1 Jan 20 06:38:53.998846 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 20 06:38:53.998857 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:38:53.998866 kernel: random: crng init done Jan 20 06:38:53.998875 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 06:38:53.998885 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 20 06:38:53.998893 kernel: Fallback order for Node 0: 0 Jan 20 06:38:53.998902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jan 20 06:38:53.998914 kernel: Policy zone: DMA32 Jan 20 06:38:53.998923 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 06:38:53.998932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 06:38:53.998941 kernel: Kernel/User page tables isolation: enabled Jan 20 06:38:53.998950 kernel: ftrace: allocating 40128 entries in 157 pages Jan 20 06:38:53.998959 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 06:38:53.998969 kernel: Dynamic Preempt: voluntary Jan 20 06:38:53.998981 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 06:38:53.998991 kernel: rcu: RCU event tracing is enabled. Jan 20 06:38:53.999000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 06:38:53.999009 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 06:38:53.999018 kernel: Rude variant of Tasks RCU enabled. Jan 20 06:38:53.999028 kernel: Tracing variant of Tasks RCU enabled. Jan 20 06:38:53.999036 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 06:38:53.999045 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 06:38:53.999057 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:38:53.999069 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:38:53.999078 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:38:53.999088 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 20 06:38:53.999097 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 20 06:38:53.999106 kernel: Console: colour VGA+ 80x25 Jan 20 06:38:53.999115 kernel: printk: legacy console [tty0] enabled Jan 20 06:38:53.999127 kernel: printk: legacy console [ttyS0] enabled Jan 20 06:38:53.999136 kernel: ACPI: Core revision 20240827 Jan 20 06:38:53.999146 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 06:38:53.999163 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 06:38:53.999175 kernel: x2apic enabled Jan 20 06:38:53.999185 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 06:38:53.999195 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 06:38:53.999205 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 20 06:38:53.999217 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 20 06:38:53.999229 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 20 06:38:53.999239 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 20 06:38:53.999249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 06:38:53.999258 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 06:38:53.999271 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 06:38:53.999281 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 20 06:38:53.999291 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 20 06:38:53.999301 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 20 06:38:53.999311 kernel: MDS: Mitigation: Clear CPU buffers Jan 20 06:38:53.999321 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 06:38:53.999330 kernel: active return thunk: its_return_thunk Jan 20 06:38:53.999342 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 20 06:38:53.999352 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 06:38:53.999361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 06:38:53.999371 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 06:38:53.999380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 06:38:53.999390 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 20 06:38:53.999400 kernel: Freeing SMP alternatives memory: 32K Jan 20 06:38:53.999412 kernel: pid_max: default: 32768 minimum: 301 Jan 20 06:38:53.999421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 06:38:53.999431 kernel: landlock: Up and running. Jan 20 06:38:53.999441 kernel: SELinux: Initializing. Jan 20 06:38:53.999450 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 20 06:38:53.999460 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 20 06:38:53.999470 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 20 06:38:53.999482 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jan 20 06:38:53.999492 kernel: signal: max sigframe size: 1776 Jan 20 06:38:53.999501 kernel: rcu: Hierarchical SRCU implementation. Jan 20 06:38:53.999511 kernel: rcu: Max phase no-delay instances is 400. 
Jan 20 06:38:53.999521 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 06:38:53.999530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 06:38:53.999540 kernel: smp: Bringing up secondary CPUs ... Jan 20 06:38:53.999556 kernel: smpboot: x86: Booting SMP configuration: Jan 20 06:38:53.999565 kernel: .... node #0, CPUs: #1 Jan 20 06:38:53.999575 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 06:38:53.999584 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 20 06:38:53.999594 kernel: Memory: 1983288K/2096612K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108760K reserved, 0K cma-reserved) Jan 20 06:38:53.999604 kernel: devtmpfs: initialized Jan 20 06:38:53.999613 kernel: x86/mm: Memory block size: 128MB Jan 20 06:38:53.999626 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 06:38:53.999636 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 06:38:53.999645 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 06:38:53.999655 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 06:38:53.999665 kernel: audit: initializing netlink subsys (disabled) Jan 20 06:38:53.999674 kernel: audit: type=2000 audit(1768891131.200:1): state=initialized audit_enabled=0 res=1 Jan 20 06:38:53.999684 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 06:38:53.999696 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 06:38:53.999706 kernel: cpuidle: using governor menu Jan 20 06:38:53.999716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 06:38:53.999725 kernel: dca service started, version 1.12.1 Jan 20 06:38:53.999734 kernel: PCI: Using configuration type 1 for base access Jan 20 06:38:53.999744 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 06:38:53.999754 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 06:38:53.999766 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 06:38:53.999775 kernel: ACPI: Added _OSI(Module Device) Jan 20 06:38:53.999785 kernel: ACPI: Added _OSI(Processor Device) Jan 20 06:38:53.999795 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 06:38:53.999892 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 06:38:53.999902 kernel: ACPI: Interpreter enabled Jan 20 06:38:53.999912 kernel: ACPI: PM: (supports S0 S5) Jan 20 06:38:53.999926 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 06:38:53.999936 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 06:38:53.999946 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 06:38:53.999955 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 20 06:38:53.999965 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 06:38:54.000200 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 20 06:38:54.000341 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 20 06:38:54.000482 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 20 06:38:54.000495 kernel: acpiphp: Slot [3] registered Jan 20 06:38:54.000505 kernel: acpiphp: Slot [4] registered Jan 20 06:38:54.000516 kernel: acpiphp: Slot [5] registered Jan 20 06:38:54.000526 kernel: acpiphp: Slot [6] registered Jan 20 06:38:54.000535 kernel: acpiphp: Slot [7] registered Jan 20 06:38:54.000548 kernel: acpiphp: Slot [8] registered Jan 20 06:38:54.000558 kernel: acpiphp: Slot [9] registered Jan 20 06:38:54.000568 kernel: acpiphp: Slot [10] registered Jan 20 06:38:54.000578 kernel: acpiphp: Slot [11] registered Jan 20 06:38:54.000587 kernel: acpiphp: Slot [12] registered Jan 20 06:38:54.000597 kernel: acpiphp: Slot [13] registered Jan 20 06:38:54.000607 kernel: acpiphp: Slot [14] registered Jan 20 06:38:54.000617 kernel: acpiphp: Slot [15] registered Jan 20 06:38:54.000629 kernel: acpiphp: Slot [16] registered Jan 20 06:38:54.000638 kernel: acpiphp: Slot [17] registered Jan 20 06:38:54.000648 kernel: acpiphp: Slot [18] registered Jan 20 06:38:54.000657 kernel: acpiphp: Slot [19] registered Jan 20 06:38:54.000667 kernel: acpiphp: Slot [20] registered Jan 20 06:38:54.000677 kernel: acpiphp: Slot [21] registered Jan 20 06:38:54.000686 kernel: acpiphp: Slot [22] registered Jan 20 06:38:54.000699 kernel: acpiphp: Slot [23] registered Jan 20 06:38:54.000709 kernel: acpiphp: Slot [24] registered Jan 20 06:38:54.000718 kernel: acpiphp: Slot [25] registered Jan 20 06:38:54.000728 kernel: acpiphp: Slot [26] registered Jan 20 06:38:54.000738 kernel: acpiphp: Slot [27] registered Jan 20 06:38:54.000747 kernel: acpiphp: Slot [28] registered Jan 20 06:38:54.000757 kernel: acpiphp: Slot [29] registered Jan 20 06:38:54.000770 kernel: acpiphp: Slot [30] registered Jan 20 06:38:54.000780 kernel: acpiphp: Slot [31] registered Jan 20 06:38:54.000789 kernel: PCI host bridge to bus 0000:00 Jan 20 06:38:54.000942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 06:38:54.001099 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 06:38:54.001223 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 06:38:54.001342 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jan 20 06:38:54.001466 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 20 06:38:54.001584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 06:38:54.001740 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 20 06:38:54.001921 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 20 06:38:54.002068 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jan 20 06:38:54.002249 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jan 20 06:38:54.002384 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jan 20 06:38:54.004709 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jan 20 06:38:54.004909 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jan 20 06:38:54.005047 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jan 20 06:38:54.005194 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 20 06:38:54.005341 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jan 20 06:38:54.005485 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 20 06:38:54.005620 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 20 06:38:54.005753 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 20 06:38:54.007106 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 20 06:38:54.007282 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jan 20 06:38:54.007412 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jan 20 06:38:54.007540 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jan 20 06:38:54.007668 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jan 20 06:38:54.007795 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 06:38:54.007952 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 06:38:54.008087 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jan 20 06:38:54.008213 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jan 20 06:38:54.008340 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jan 20 06:38:54.008514 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 06:38:54.008672 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jan 20 06:38:54.009123 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jan 20 06:38:54.009263 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 20 06:38:54.009405 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 20 06:38:54.009539 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jan 20 06:38:54.009670 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jan 20 06:38:54.009823 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 20 06:38:54.010002 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 06:38:54.010136 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jan 20 06:38:54.010267 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jan 20 06:38:54.010400 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jan 20 06:38:54.010574 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 06:38:54.010715 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jan 20 06:38:54.011024 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jan 20 06:38:54.011167 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jan 20 06:38:54.011314 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jan 20 06:38:54.011450 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jan 20 06:38:54.011592 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 20 06:38:54.011606 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 06:38:54.011616 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 06:38:54.011626 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 06:38:54.011636 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 06:38:54.011646 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 20 06:38:54.011656 kernel: iommu: Default domain type: Translated Jan 20 06:38:54.011669 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 06:38:54.011679 kernel: PCI: Using ACPI for IRQ routing Jan 20 06:38:54.011689 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 06:38:54.011699 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 06:38:54.011709 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 20 06:38:54.011862 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 20 06:38:54.011999 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 20 06:38:54.012138 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 06:38:54.012151 kernel: vgaarb: loaded Jan 20 06:38:54.012160 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 06:38:54.012170 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 06:38:54.012181 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 06:38:54.012190 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 06:38:54.012200 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 06:38:54.012214 kernel: pnp: PnP ACPI init Jan 20 06:38:54.012223 kernel: pnp: PnP ACPI: found 4 devices Jan 20 06:38:54.012233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 06:38:54.014115 kernel: NET: Registered PF_INET protocol family Jan 20 06:38:54.014133 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 06:38:54.014147 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 20 06:38:54.014161 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 06:38:54.014175 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 20 06:38:54.014197 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 20 06:38:54.014210 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 20 06:38:54.014224 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 20 06:38:54.014238 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 20 06:38:54.014251 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 06:38:54.014264 kernel: NET: Registered PF_XDP protocol family Jan 20 06:38:54.014539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 06:38:54.014733 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 06:38:54.015783 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 06:38:54.015937 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 20 06:38:54.016058 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 20 06:38:54.016243 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 20 06:38:54.016448 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 20 06:38:54.016473 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 20 06:38:54.016629 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 30035 usecs Jan 20 06:38:54.016646 kernel: PCI: CLS 0 bytes, default 64 Jan 20 06:38:54.016656 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 20 06:38:54.016667 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 20 06:38:54.016677 kernel: Initialise system trusted keyrings Jan 20 06:38:54.016687 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 20 06:38:54.016701 kernel: Key type asymmetric registered Jan 20 06:38:54.016711 kernel: Asymmetric key parser 'x509' registered Jan 20 06:38:54.016726 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 06:38:54.016739 kernel: io scheduler mq-deadline registered Jan 20 06:38:54.016754 kernel: io scheduler kyber registered Jan 20 06:38:54.016770 kernel: io scheduler bfq registered Jan 20 06:38:54.016786 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 06:38:54.020352 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 20 06:38:54.020368 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 20 06:38:54.020379 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 20 06:38:54.020389 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 06:38:54.020400 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 06:38:54.020410 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 06:38:54.020421 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 06:38:54.020441 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 06:38:54.020452 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 06:38:54.020705 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 20 06:38:54.021014 kernel: rtc_cmos 00:03: registered as rtc0 Jan 20 06:38:54.021151 kernel: rtc_cmos 00:03: setting system clock to 2026-01-20T06:38:52 UTC (1768891132) Jan 20 06:38:54.021279 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 20 06:38:54.021299 kernel: intel_pstate: CPU model not supported Jan 20 06:38:54.021310 kernel: NET: Registered PF_INET6 protocol family Jan 20 06:38:54.021321 kernel: Segment Routing with IPv6 Jan 20 06:38:54.021331 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 06:38:54.021341 kernel: NET: Registered PF_PACKET protocol family Jan 20 06:38:54.021352 kernel: Key type dns_resolver registered Jan 20 06:38:54.021363 kernel: IPI shorthand broadcast: enabled Jan 20 06:38:54.021376 kernel: sched_clock: Marking stable (1809005230, 166752292)->(2109723698, -133966176) Jan 20 06:38:54.021386 kernel: registered taskstats version 1 Jan 20 06:38:54.021396 kernel: Loading compiled-in X.509 certificates Jan 20 06:38:54.021407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 
20 06:38:54.021417 kernel: Demotion targets for Node 0: null Jan 20 06:38:54.021427 kernel: Key type .fscrypt registered Jan 20 06:38:54.021437 kernel: Key type fscrypt-provisioning registered Jan 20 06:38:54.021466 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 06:38:54.021479 kernel: ima: Allocated hash algorithm: sha1 Jan 20 06:38:54.021490 kernel: ima: No architecture policies found Jan 20 06:38:54.021500 kernel: clk: Disabling unused clocks Jan 20 06:38:54.021511 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 06:38:54.021522 kernel: Write protecting the kernel read-only data: 47104k Jan 20 06:38:54.021533 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 06:38:54.021546 kernel: Run /init as init process Jan 20 06:38:54.021556 kernel: with arguments: Jan 20 06:38:54.021567 kernel: /init Jan 20 06:38:54.021578 kernel: with environment: Jan 20 06:38:54.021588 kernel: HOME=/ Jan 20 06:38:54.021598 kernel: TERM=linux Jan 20 06:38:54.021608 kernel: SCSI subsystem initialized Jan 20 06:38:54.021619 kernel: libata version 3.00 loaded. Jan 20 06:38:54.021773 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 20 06:38:54.021953 kernel: scsi host0: ata_piix Jan 20 06:38:54.022101 kernel: scsi host1: ata_piix Jan 20 06:38:54.022115 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jan 20 06:38:54.022126 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jan 20 06:38:54.022141 kernel: ACPI: bus type USB registered Jan 20 06:38:54.022152 kernel: usbcore: registered new interface driver usbfs Jan 20 06:38:54.022162 kernel: usbcore: registered new interface driver hub Jan 20 06:38:54.022173 kernel: usbcore: registered new device driver usb Jan 20 06:38:54.022312 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 20 06:38:54.022518 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 20 06:38:54.022659 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 20 06:38:54.024857 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 20 06:38:54.025154 kernel: hub 1-0:1.0: USB hub found Jan 20 06:38:54.025313 kernel: hub 1-0:1.0: 2 ports detected Jan 20 06:38:54.025482 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 20 06:38:54.025619 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 20 06:38:54.025634 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 06:38:54.025645 kernel: GPT:16515071 != 125829119 Jan 20 06:38:54.025656 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 06:38:54.025666 kernel: GPT:16515071 != 125829119 Jan 20 06:38:54.025677 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 06:38:54.025691 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 06:38:54.025849 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 20 06:38:54.025986 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jan 20 06:38:54.026126 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jan 20 06:38:54.026281 kernel: scsi host2: Virtio SCSI HBA Jan 20 06:38:54.026301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 06:38:54.026315 kernel: device-mapper: uevent: version 1.0.3 Jan 20 06:38:54.026327 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 06:38:54.026338 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 06:38:54.026348 kernel: raid6: avx2x4 gen() 16618 MB/s Jan 20 06:38:54.026359 kernel: raid6: avx2x2 gen() 16876 MB/s Jan 20 06:38:54.026369 kernel: raid6: avx2x1 gen() 12251 MB/s Jan 20 06:38:54.026383 kernel: raid6: using algorithm avx2x2 gen() 16876 MB/s Jan 20 06:38:54.026394 kernel: raid6: .... xor() 20401 MB/s, rmw enabled Jan 20 06:38:54.026405 kernel: raid6: using avx2x2 recovery algorithm Jan 20 06:38:54.026416 kernel: xor: automatically using best checksumming function avx Jan 20 06:38:54.026444 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 06:38:54.026455 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (162) Jan 20 06:38:54.026466 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 06:38:54.026480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:38:54.026491 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 06:38:54.026502 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 06:38:54.026512 kernel: loop: module loaded Jan 20 06:38:54.026523 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 06:38:54.026534 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 06:38:54.026547 systemd[1]: Successfully made /usr/ read-only. Jan 20 06:38:54.026565 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:38:54.026576 systemd[1]: Detected virtualization kvm. Jan 20 06:38:54.026587 systemd[1]: Detected architecture x86-64. Jan 20 06:38:54.026598 systemd[1]: Running in initrd. Jan 20 06:38:54.026609 systemd[1]: No hostname configured, using default hostname. Jan 20 06:38:54.026621 systemd[1]: Hostname set to . Jan 20 06:38:54.026634 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:38:54.026646 systemd[1]: Queued start job for default target initrd.target. Jan 20 06:38:54.026657 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:38:54.026668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:38:54.026679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:38:54.026691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 06:38:54.026705 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:38:54.026717 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 06:38:54.026728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 06:38:54.026739 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 20 06:38:54.026750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:38:54.026761 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:38:54.026775 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:38:54.026786 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:38:54.027838 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:38:54.027872 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:38:54.027889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:38:54.027907 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:38:54.027926 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:38:54.027951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 06:38:54.027968 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 06:38:54.027987 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:38:54.028005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:38:54.028023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:38:54.028040 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:38:54.028059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 06:38:54.028082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 06:38:54.028099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:38:54.028118 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 06:38:54.028136 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 06:38:54.028152 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 06:38:54.028168 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:38:54.028187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:38:54.028205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:38:54.028221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 06:38:54.028310 systemd-journald[298]: Collecting audit messages is enabled. Jan 20 06:38:54.028351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:38:54.028369 kernel: audit: type=1130 audit(1768891133.995:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.028386 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 06:38:54.028406 kernel: audit: type=1130 audit(1768891134.002:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.028422 kernel: audit: type=1130 audit(1768891134.005:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:54.028438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:38:54.028455 systemd-journald[298]: Journal started Jan 20 06:38:54.028486 systemd-journald[298]: Runtime Journal (/run/log/journal/504741c20b604c84ac3edb3be3b3a4d1) is 4.8M, max 39.1M, 34.2M free. Jan 20 06:38:53.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.033053 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:38:54.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.039833 kernel: audit: type=1130 audit(1768891134.033:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.044766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:38:54.068218 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:38:54.127680 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 06:38:54.127724 kernel: Bridge firewalling registered Jan 20 06:38:54.079866 systemd-modules-load[300]: Inserted module 'br_netfilter' Jan 20 06:38:54.142047 kernel: audit: type=1130 audit(1768891134.126:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.142083 kernel: audit: type=1130 audit(1768891134.127:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.142098 kernel: audit: type=1130 audit(1768891134.136:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:54.085139 systemd-tmpfiles[311]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 06:38:54.147483 kernel: audit: type=1130 audit(1768891134.142:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.128048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:38:54.136501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:38:54.141176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:38:54.148979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 06:38:54.152194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:38:54.154243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:38:54.175163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:38:54.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.180828 kernel: audit: type=1130 audit(1768891134.175:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.182789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:38:54.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.186000 audit: BPF prog-id=6 op=LOAD Jan 20 06:38:54.189401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:38:54.193383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:38:54.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.198031 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 06:38:54.227832 dracut-cmdline[335]: dracut-109 Jan 20 06:38:54.235718 dracut-cmdline[335]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:38:54.264414 systemd-resolved[334]: Positive Trust Anchors: Jan 20 06:38:54.265237 systemd-resolved[334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:38:54.265888 systemd-resolved[334]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:38:54.265929 systemd-resolved[334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:38:54.316929 systemd-resolved[334]: Defaulting to hostname 'linux'. Jan 20 06:38:54.318767 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:38:54.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.320235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:38:54.377846 kernel: Loading iSCSI transport class v2.0-870. Jan 20 06:38:54.400892 kernel: iscsi: registered transport (tcp) Jan 20 06:38:54.436895 kernel: iscsi: registered transport (qla4xxx) Jan 20 06:38:54.436988 kernel: QLogic iSCSI HBA Driver Jan 20 06:38:54.479213 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:38:54.511609 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:38:54.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.514852 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:38:54.596758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 06:38:54.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.600198 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 06:38:54.601999 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 06:38:54.650755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:38:54.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.652000 audit: BPF prog-id=7 op=LOAD Jan 20 06:38:54.652000 audit: BPF prog-id=8 op=LOAD Jan 20 06:38:54.655054 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:38:54.695861 systemd-udevd[566]: Using default interface naming scheme 'v257'. Jan 20 06:38:54.711194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:38:54.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:54.716234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 06:38:54.756331 dracut-pre-trigger[629]: rd.md=0: removing MD RAID activation Jan 20 06:38:54.777253 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:38:54.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.779000 audit: BPF prog-id=9 op=LOAD Jan 20 06:38:54.783023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:38:54.806652 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:38:54.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.811300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:38:54.844824 systemd-networkd[688]: lo: Link UP Jan 20 06:38:54.844838 systemd-networkd[688]: lo: Gained carrier Jan 20 06:38:54.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.846644 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:38:54.847416 systemd[1]: Reached target network.target - Network. Jan 20 06:38:54.921460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:38:54.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:54.925367 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 06:38:55.055348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 06:38:55.067775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 06:38:55.096618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 06:38:55.099045 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 06:38:55.135731 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 06:38:55.147033 disk-uuid[734]: Primary Header is updated. Jan 20 06:38:55.147033 disk-uuid[734]: Secondary Entries is updated. Jan 20 06:38:55.147033 disk-uuid[734]: Secondary Header is updated. Jan 20 06:38:55.177984 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 06:38:55.189404 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 06:38:55.226875 kernel: AES CTR mode by8 optimization enabled Jan 20 06:38:55.291662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:38:55.294001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:38:55.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:55.295766 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:38:55.301034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:38:55.348397 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:38:55.348408 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:38:55.351069 systemd-networkd[688]: eth1: Link UP Jan 20 06:38:55.351883 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 20 06:38:55.351889 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 20 06:38:55.353467 systemd-networkd[688]: eth1: Gained carrier Jan 20 06:38:55.353487 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:38:55.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:55.358042 systemd-networkd[688]: eth0: Link UP Jan 20 06:38:55.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:55.358559 systemd-networkd[688]: eth0: Gained carrier Jan 20 06:38:55.358579 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 20 06:38:55.368964 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.51/20 acquired from 169.254.169.253 Jan 20 06:38:55.377888 systemd-networkd[688]: eth0: DHCPv4 address 164.92.87.233/20, gateway 164.92.80.1 acquired from 169.254.169.253 Jan 20 06:38:55.456272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 06:38:55.457610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:38:55.461492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:38:55.462771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:38:55.464135 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:38:55.467106 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 06:38:55.500604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:38:55.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.227260 disk-uuid[736]: Warning: The kernel is still using the old partition table. Jan 20 06:38:56.227260 disk-uuid[736]: The new table will be used at the next reboot or after you Jan 20 06:38:56.227260 disk-uuid[736]: run partprobe(8) or kpartx(8) Jan 20 06:38:56.227260 disk-uuid[736]: The operation has completed successfully. Jan 20 06:38:56.239575 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 06:38:56.239872 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 20 06:38:56.247595 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 20 06:38:56.247711 kernel: audit: type=1130 audit(1768891136.240:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.244155 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 06:38:56.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.253877 kernel: audit: type=1131 audit(1768891136.240:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.301087 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (828) Jan 20 06:38:56.301219 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:38:56.304121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:38:56.310227 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:38:56.310350 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:38:56.321860 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:38:56.324108 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 06:38:56.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.327143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 06:38:56.333155 kernel: audit: type=1130 audit(1768891136.324:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.598903 ignition[847]: Ignition 2.24.0 Jan 20 06:38:56.598918 ignition[847]: Stage: fetch-offline Jan 20 06:38:56.598991 ignition[847]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:56.599005 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:56.603508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:38:56.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.599145 ignition[847]: parsed url from cmdline: "" Jan 20 06:38:56.607225 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 06:38:56.611188 kernel: audit: type=1130 audit(1768891136.603:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:56.599149 ignition[847]: no config URL provided Jan 20 06:38:56.599243 ignition[847]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:38:56.599257 ignition[847]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:38:56.599264 ignition[847]: failed to fetch config: resource requires networking Jan 20 06:38:56.601425 ignition[847]: Ignition finished successfully Jan 20 06:38:56.654611 ignition[856]: Ignition 2.24.0 Jan 20 06:38:56.654626 ignition[856]: Stage: fetch Jan 20 06:38:56.654880 ignition[856]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:56.654891 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:56.655041 ignition[856]: parsed url from cmdline: "" Jan 20 06:38:56.655045 ignition[856]: no config URL provided Jan 20 06:38:56.655062 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:38:56.655072 ignition[856]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:38:56.655103 ignition[856]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 20 06:38:56.660992 systemd-networkd[688]: eth1: Gained IPv6LL Jan 20 06:38:56.686082 ignition[856]: GET result: OK Jan 20 06:38:56.687201 ignition[856]: parsing config with SHA512: 7d4ed99a374d443e6844512a57ad1299c0ab750525e8ab4ff2db976379cb5aed2df61d415cc541864efe5fc44ae38560b69d6117b9098d1249fd86a6606f6b99 Jan 20 06:38:56.699061 unknown[856]: fetched base config from "system" Jan 20 06:38:56.699076 unknown[856]: fetched base config from "system" Jan 20 06:38:56.699464 ignition[856]: fetch: fetch complete Jan 20 06:38:56.699083 unknown[856]: fetched user config from "digitalocean" Jan 20 06:38:56.699470 ignition[856]: fetch: fetch passed Jan 20 06:38:56.699539 ignition[856]: Ignition finished successfully Jan 20 06:38:56.704227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 06:38:56.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.706990 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 06:38:56.710140 kernel: audit: type=1130 audit(1768891136.704:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.755778 ignition[862]: Ignition 2.24.0 Jan 20 06:38:56.755792 ignition[862]: Stage: kargs Jan 20 06:38:56.756085 ignition[862]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:56.756107 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:56.757368 ignition[862]: kargs: kargs passed Jan 20 06:38:56.759931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 06:38:56.757426 ignition[862]: Ignition finished successfully Jan 20 06:38:56.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.763332 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 06:38:56.769124 kernel: audit: type=1130 audit(1768891136.759:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:56.802926 ignition[869]: Ignition 2.24.0 Jan 20 06:38:56.802945 ignition[869]: Stage: disks Jan 20 06:38:56.803262 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:56.803287 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:56.804832 ignition[869]: disks: disks passed Jan 20 06:38:56.806554 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 06:38:56.813032 kernel: audit: type=1130 audit(1768891136.806:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.804915 ignition[869]: Ignition finished successfully Jan 20 06:38:56.808014 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 06:38:56.813528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 06:38:56.814599 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:38:56.815723 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:38:56.817023 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:38:56.819754 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 06:38:56.876257 systemd-fsck[878]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 06:38:56.880674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 06:38:56.887724 kernel: audit: type=1130 audit(1768891136.880:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:56.882626 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 06:38:57.043843 kernel: EXT4-fs (vda9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. Jan 20 06:38:57.045058 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 06:38:57.046330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 06:38:57.049221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:38:57.051167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 06:38:57.055007 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 20 06:38:57.066834 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 06:38:57.072007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 06:38:57.072097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 20 06:38:57.090497 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Jan 20 06:38:57.090635 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:38:57.090661 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:38:57.098712 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 06:38:57.102369 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:38:57.102526 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:38:57.104815 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:38:57.119218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 06:38:57.173116 systemd-networkd[688]: eth0: Gained IPv6LL Jan 20 06:38:57.191006 coreos-metadata[889]: Jan 20 06:38:57.190 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 20 06:38:57.201837 coreos-metadata[889]: Jan 20 06:38:57.201 INFO Fetch successful Jan 20 06:38:57.214109 coreos-metadata[888]: Jan 20 06:38:57.213 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 20 06:38:57.215558 coreos-metadata[889]: Jan 20 06:38:57.215 INFO wrote hostname ci-4585.0.0-n-f46ee37080 to /sysroot/etc/hostname Jan 20 06:38:57.216685 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 06:38:57.223216 kernel: audit: type=1130 audit(1768891137.217:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.229920 coreos-metadata[888]: Jan 20 06:38:57.229 INFO Fetch successful Jan 20 06:38:57.239359 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 20 06:38:57.239498 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 20 06:38:57.245962 kernel: audit: type=1130 audit(1768891137.240:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.419451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 06:38:57.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.423490 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 06:38:57.427067 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 06:38:57.447908 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 20 06:38:57.449209 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:38:57.471337 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 06:38:57.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.495706 ignition[992]: INFO : Ignition 2.24.0 Jan 20 06:38:57.496657 ignition[992]: INFO : Stage: mount Jan 20 06:38:57.497395 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:57.498992 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:57.500186 ignition[992]: INFO : mount: mount passed Jan 20 06:38:57.500186 ignition[992]: INFO : Ignition finished successfully Jan 20 06:38:57.500842 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 06:38:57.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:57.504007 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 06:38:57.531258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:38:57.558841 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1003) Jan 20 06:38:57.559673 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:38:57.562331 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:38:57.566068 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:38:57.566185 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:38:57.569985 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 06:38:57.628876 ignition[1019]: INFO : Ignition 2.24.0 Jan 20 06:38:57.630895 ignition[1019]: INFO : Stage: files Jan 20 06:38:57.630895 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:57.630895 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:57.633297 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping Jan 20 06:38:57.634966 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 06:38:57.634966 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 06:38:57.639733 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 06:38:57.640820 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 06:38:57.642263 unknown[1019]: wrote ssh authorized keys file for user: core Jan 20 06:38:57.643159 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 06:38:57.647225 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:38:57.647225 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 06:38:57.753326 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 06:38:57.815398 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:38:57.816454 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:38:57.821964 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 06:38:58.214864 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 06:38:58.822855 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:38:58.824732 ignition[1019]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 06:38:58.826327 ignition[1019]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:38:58.828865 ignition[1019]: INFO : files: files passed Jan 20 06:38:58.828865 ignition[1019]: INFO : Ignition finished successfully Jan 20 06:38:58.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.833280 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 06:38:58.837001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 06:38:58.839981 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 06:38:58.852879 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 06:38:58.853024 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 06:38:58.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:58.873118 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:38:58.874913 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:38:58.876252 initrd-setup-root-after-ignition[1051]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:38:58.878271 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:38:58.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.879606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 06:38:58.882222 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 06:38:58.950347 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 06:38:58.950585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 06:38:58.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.952495 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 06:38:58.953109 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 06:38:58.954316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 06:38:58.955552 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 06:38:58.984847 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:38:58.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:58.987627 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 06:38:59.016315 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:38:59.016682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:38:59.018262 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:38:59.019794 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 06:38:59.021076 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 06:38:59.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.021330 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:38:59.022988 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 06:38:59.023891 systemd[1]: Stopped target basic.target - Basic System. Jan 20 06:38:59.025319 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 20 06:38:59.026661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:38:59.028140 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 06:38:59.029450 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:38:59.030983 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 06:38:59.032243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:38:59.033652 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 06:38:59.035051 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 06:38:59.036428 systemd[1]: Stopped target swap.target - Swaps. Jan 20 06:38:59.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.037602 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 06:38:59.037904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:38:59.039049 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:38:59.039707 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:38:59.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.046283 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 06:38:59.046652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:38:59.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.047925 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 06:38:59.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.048240 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 06:38:59.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.049643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 06:38:59.050014 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:38:59.051552 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 06:38:59.051680 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 06:38:59.052871 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 06:38:59.053096 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 06:38:59.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.057120 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 20 06:38:59.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.061133 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 06:38:59.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.061649 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 06:38:59.061901 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:38:59.063045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 06:38:59.063172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:38:59.065198 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 06:38:59.065370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:38:59.080149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 06:38:59.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.081272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 06:38:59.105165 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 06:38:59.109849 ignition[1075]: INFO : Ignition 2.24.0 Jan 20 06:38:59.109849 ignition[1075]: INFO : Stage: umount Jan 20 06:38:59.109849 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:38:59.109849 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 20 06:38:59.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.114285 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 06:38:59.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.117287 ignition[1075]: INFO : umount: umount passed Jan 20 06:38:59.117287 ignition[1075]: INFO : Ignition finished successfully Jan 20 06:38:59.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.114433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 06:38:59.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.115936 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 20 06:38:59.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.116061 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 06:38:59.117750 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 06:38:59.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.117882 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 06:38:59.118759 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 06:38:59.118861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 06:38:59.119690 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 06:38:59.119766 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 06:38:59.120720 systemd[1]: Stopped target network.target - Network. Jan 20 06:38:59.121843 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 06:38:59.121933 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:38:59.123083 systemd[1]: Stopped target paths.target - Path Units. Jan 20 06:38:59.123971 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 06:38:59.128119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:38:59.128838 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 06:38:59.129653 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 06:38:59.130672 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 06:38:59.130731 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:38:59.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.131668 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 06:38:59.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.131708 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:38:59.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.132458 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 06:38:59.132484 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:38:59.133437 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 06:38:59.133514 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 06:38:59.134449 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 06:38:59.134510 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 06:38:59.135347 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 06:38:59.135409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 20 06:38:59.136384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 06:38:59.137459 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 06:38:59.147352 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 06:38:59.147593 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 06:38:59.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.150000 audit: BPF prog-id=6 op=UNLOAD Jan 20 06:38:59.153619 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 06:38:59.153855 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 06:38:59.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.157516 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 06:38:59.157000 audit: BPF prog-id=9 op=UNLOAD Jan 20 06:38:59.158520 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 06:38:59.158586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:38:59.160748 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 06:38:59.162498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 06:38:59.162613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:38:59.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.165137 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 06:38:59.165234 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:38:59.165942 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 06:38:59.166013 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 06:38:59.166896 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:38:59.187467 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 06:38:59.187745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:38:59.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.189491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 06:38:59.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:59.189572 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 06:38:59.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.190288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 06:38:59.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.190345 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:38:59.191075 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 06:38:59.191140 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:38:59.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.193382 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 06:38:59.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.193502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 06:38:59.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.195112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 06:38:59.195215 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:38:59.197212 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 06:38:59.198190 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 06:38:59.198267 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:38:59.202425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 06:38:59.202543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:38:59.203340 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 06:38:59.203412 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:38:59.204172 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 06:38:59.204247 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:38:59.205418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 20 06:38:59.205504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:38:59.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.230239 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 06:38:59.230470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 06:38:59.234844 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 06:38:59.235385 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 06:38:59.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:59.236567 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 06:38:59.238184 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 06:38:59.268190 systemd[1]: Switching root. Jan 20 06:38:59.307223 systemd-journald[298]: Journal stopped Jan 20 06:39:01.131232 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Jan 20 06:39:01.131361 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 06:39:01.131397 kernel: SELinux: policy capability open_perms=1 Jan 20 06:39:01.131423 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 06:39:01.131447 kernel: SELinux: policy capability always_check_network=0 Jan 20 06:39:01.131490 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 06:39:01.131510 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 06:39:01.131528 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 06:39:01.131547 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 06:39:01.131568 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 06:39:01.131592 systemd[1]: Successfully loaded SELinux policy in 83.080ms. Jan 20 06:39:01.131647 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.066ms. Jan 20 06:39:01.131671 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:39:01.131692 systemd[1]: Detected virtualization kvm. Jan 20 06:39:01.131714 systemd[1]: Detected architecture x86-64. Jan 20 06:39:01.131736 systemd[1]: Detected first boot. Jan 20 06:39:01.131758 systemd[1]: Hostname set to . Jan 20 06:39:01.131780 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:39:01.131836 zram_generator::config[1119]: No configuration found. 
Jan 20 06:39:01.131869 kernel: Guest personality initialized and is inactive Jan 20 06:39:01.131887 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 06:39:01.131911 kernel: Initialized host personality Jan 20 06:39:01.131931 kernel: NET: Registered PF_VSOCK protocol family Jan 20 06:39:01.131952 systemd[1]: Populated /etc with preset unit settings. Jan 20 06:39:01.131973 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 06:39:01.132004 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 06:39:01.132024 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 06:39:01.132053 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 06:39:01.132074 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 06:39:01.132092 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 06:39:01.132113 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 06:39:01.132138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 06:39:01.132165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 06:39:01.132190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 06:39:01.132210 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 06:39:01.132231 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:39:01.132251 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:39:01.132272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 06:39:01.132293 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 06:39:01.132323 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 06:39:01.132344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:39:01.132363 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 06:39:01.132382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:39:01.132402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:39:01.132431 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 06:39:01.132451 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 06:39:01.132471 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 06:39:01.132491 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 06:39:01.132510 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:39:01.132529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:39:01.132549 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 06:39:01.132577 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:39:01.132598 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:39:01.132617 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 20 06:39:01.132637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 06:39:01.132656 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 06:39:01.132675 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:39:01.132695 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 06:39:01.132724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:39:01.132743 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 06:39:01.132764 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 06:39:01.132783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:39:01.137886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:39:01.137939 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 06:39:01.137961 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 06:39:01.138075 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 06:39:01.138096 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 06:39:01.138115 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:01.138137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 06:39:01.138156 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 06:39:01.138176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 06:39:01.138196 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 06:39:01.138225 systemd[1]: Reached target machines.target - Containers. Jan 20 06:39:01.138246 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 06:39:01.138265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:39:01.138284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:39:01.138304 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 06:39:01.138323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:39:01.138355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:39:01.138384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:39:01.138402 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 06:39:01.138421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:39:01.138442 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 06:39:01.138463 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 06:39:01.138483 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 06:39:01.138511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 06:39:01.138531 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 20 06:39:01.138552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:39:01.138573 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:39:01.138601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:39:01.138623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:39:01.138645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 06:39:01.138666 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 06:39:01.138707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:39:01.138746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:01.138767 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 06:39:01.138786 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 06:39:01.138830 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 06:39:01.138861 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 06:39:01.138880 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 06:39:01.138899 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 06:39:01.138918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:39:01.138938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 06:39:01.138957 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 06:39:01.138978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:39:01.139006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:39:01.139026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:39:01.139045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:39:01.139064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:39:01.139083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:39:01.139102 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 06:39:01.139130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:39:01.139151 kernel: fuse: init (API version 7.41) Jan 20 06:39:01.139172 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 06:39:01.139191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:39:01.139211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:39:01.139230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:39:01.139249 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 06:39:01.139279 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 20 06:39:01.139306 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 06:39:01.139327 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 06:39:01.139348 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:39:01.139369 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 06:39:01.139389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:39:01.139410 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:39:01.139430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 06:39:01.139458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:39:01.139479 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 06:39:01.139501 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 06:39:01.139523 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:39:01.139544 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:39:01.139565 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 06:39:01.139584 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 06:39:01.139612 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 06:39:01.139632 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 06:39:01.139653 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 06:39:01.139672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:39:01.139701 kernel: ACPI: bus type drm_connector registered Jan 20 06:39:01.139722 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:39:01.139743 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:39:01.139778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 06:39:01.140899 systemd-journald[1192]: Collecting audit messages is enabled. Jan 20 06:39:01.140962 kernel: loop1: detected capacity change from 0 to 224512 Jan 20 06:39:01.140984 systemd-journald[1192]: Journal started Jan 20 06:39:01.141021 systemd-journald[1192]: Runtime Journal (/run/log/journal/504741c20b604c84ac3edb3be3b3a4d1) is 4.8M, max 39.1M, 34.2M free. Jan 20 06:39:00.487000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 06:39:00.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:00.685000 audit: BPF prog-id=14 op=UNLOAD Jan 20 06:39:00.685000 audit: BPF prog-id=13 op=UNLOAD Jan 20 06:39:00.687000 audit: BPF prog-id=15 op=LOAD Jan 20 06:39:00.692000 audit: BPF prog-id=16 op=LOAD Jan 20 06:39:00.696000 audit: BPF prog-id=17 op=LOAD Jan 20 06:39:00.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:01.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.124000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 06:39:01.124000 audit[1192]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe7c2dbff0 a2=4000 a3=0 items=0 ppid=1 pid=1192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:01.124000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 06:39:00.378166 systemd[1]: Queued start job for default target multi-user.target. Jan 20 06:39:01.145042 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:39:01.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:00.394222 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 06:39:00.395135 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 06:39:01.152324 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 06:39:01.168993 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 06:39:01.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.179286 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 20 06:39:01.179324 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. 
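The SERVICE_START/SERVICE_STOP audit records interleaved above are also captured by journald; assuming the usual tooling is present, they can be pulled back out with something like:
    journalctl -b _TRANSPORT=audit              # audit records journald captured this boot
    ausearch -i -m SERVICE_START,SERVICE_STOP   # if the audit userspace tools are installed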
Jan 20 06:39:01.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.198125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:39:01.220123 systemd-journald[1192]: Time spent on flushing to /var/log/journal/504741c20b604c84ac3edb3be3b3a4d1 is 84.259ms for 1149 entries. Jan 20 06:39:01.220123 systemd-journald[1192]: System Journal (/var/log/journal/504741c20b604c84ac3edb3be3b3a4d1) is 8M, max 163.5M, 155.5M free. Jan 20 06:39:01.320019 systemd-journald[1192]: Received client request to flush runtime journal. Jan 20 06:39:01.320112 kernel: loop2: detected capacity change from 0 to 111560 Jan 20 06:39:01.320175 kernel: loop3: detected capacity change from 0 to 50784 Jan 20 06:39:01.227831 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 06:39:01.324856 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 06:39:01.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.328697 kernel: kauditd_printk_skb: 100 callbacks suppressed Jan 20 06:39:01.328861 kernel: audit: type=1130 audit(1768891141.325:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.357200 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 06:39:01.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.363031 kernel: audit: type=1130 audit(1768891141.357:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.361000 audit: BPF prog-id=18 op=LOAD Jan 20 06:39:01.361000 audit: BPF prog-id=19 op=LOAD Jan 20 06:39:01.365474 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 06:39:01.370480 kernel: audit: type=1334 audit(1768891141.361:140): prog-id=18 op=LOAD Jan 20 06:39:01.370583 kernel: audit: type=1334 audit(1768891141.361:141): prog-id=19 op=LOAD Jan 20 06:39:01.370601 kernel: audit: type=1334 audit(1768891141.361:142): prog-id=20 op=LOAD Jan 20 06:39:01.361000 audit: BPF prog-id=20 op=LOAD Jan 20 06:39:01.370000 audit: BPF prog-id=21 op=LOAD Jan 20 06:39:01.373922 kernel: audit: type=1334 audit(1768891141.370:143): prog-id=21 op=LOAD Jan 20 06:39:01.374746 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:39:01.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.381000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
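For context on the journal flush above (runtime journal under /run, persistent journal under /var/log/journal): usage and flushing can be inspected manually, and the size caps shown are tunable in journald.conf; a sketch:
    journalctl --disk-usage    # space used by runtime and persistent journals
    journalctl --flush         # ask journald to move /run/log/journal to /var/log/journal
    # illustrative journald.conf override (the "max 163.5M" above is a computed default):
    #   [Journal]
    #   SystemMaxUse=160M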
Jan 20 06:39:01.383017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:39:01.389854 kernel: audit: type=1130 audit(1768891141.383:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.409867 kernel: loop4: detected capacity change from 0 to 8 Jan 20 06:39:01.400313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 06:39:01.407076 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 06:39:01.419143 kernel: audit: type=1334 audit(1768891141.413:145): prog-id=22 op=LOAD Jan 20 06:39:01.419301 kernel: audit: type=1334 audit(1768891141.417:146): prog-id=23 op=LOAD Jan 20 06:39:01.413000 audit: BPF prog-id=22 op=LOAD Jan 20 06:39:01.417000 audit: BPF prog-id=23 op=LOAD Jan 20 06:39:01.421191 kernel: audit: type=1334 audit(1768891141.417:147): prog-id=24 op=LOAD Jan 20 06:39:01.417000 audit: BPF prog-id=24 op=LOAD Jan 20 06:39:01.422365 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 06:39:01.425000 audit: BPF prog-id=25 op=LOAD Jan 20 06:39:01.430000 audit: BPF prog-id=26 op=LOAD Jan 20 06:39:01.430000 audit: BPF prog-id=27 op=LOAD Jan 20 06:39:01.437346 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 06:39:01.443312 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 06:39:01.477858 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 06:39:01.506882 kernel: loop6: detected capacity change from 0 to 111560 Jan 20 06:39:01.532123 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jan 20 06:39:01.534914 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jan 20 06:39:01.541510 kernel: loop7: detected capacity change from 0 to 50784 Jan 20 06:39:01.559263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:39:01.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:01.573856 kernel: loop1: detected capacity change from 0 to 8 Jan 20 06:39:01.576968 (sd-merge)[1276]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Jan 20 06:39:01.588177 systemd-nsresourced[1271]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 06:39:01.592585 (sd-merge)[1276]: Merged extensions into '/usr'. Jan 20 06:39:01.605082 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 06:39:01.605115 systemd[1]: Reloading... Jan 20 06:39:01.865861 zram_generator::config[1323]: No configuration found. Jan 20 06:39:02.015239 systemd-oomd[1266]: No swap; memory pressure usage will be degraded Jan 20 06:39:02.024224 systemd-resolved[1267]: Positive Trust Anchors: Jan 20 06:39:02.024789 systemd-resolved[1267]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:39:02.024834 systemd-resolved[1267]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:39:02.024896 systemd-resolved[1267]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:39:02.048177 systemd-resolved[1267]: Using system hostname 'ci-4585.0.0-n-f46ee37080'. Jan 20 06:39:02.237962 systemd[1]: Reloading finished in 632 ms. Jan 20 06:39:02.255354 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 06:39:02.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.257457 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 06:39:02.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.258829 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 06:39:02.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.259644 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:39:02.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.260779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 06:39:02.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.265526 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:39:02.273065 systemd[1]: Starting ensure-sysext.service... Jan 20 06:39:02.278122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
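The sysext merge above (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw and oem-digitalocean.raw overlaid onto /usr) can be inspected or redone at runtime; a sketch:
    systemd-sysext status     # which extension images are currently merged
    systemd-sysext refresh    # re-merge after adding or removing *.raw images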
Jan 20 06:39:02.279000 audit: BPF prog-id=28 op=LOAD Jan 20 06:39:02.280000 audit: BPF prog-id=18 op=UNLOAD Jan 20 06:39:02.280000 audit: BPF prog-id=29 op=LOAD Jan 20 06:39:02.280000 audit: BPF prog-id=30 op=LOAD Jan 20 06:39:02.280000 audit: BPF prog-id=19 op=UNLOAD Jan 20 06:39:02.280000 audit: BPF prog-id=20 op=UNLOAD Jan 20 06:39:02.284000 audit: BPF prog-id=31 op=LOAD Jan 20 06:39:02.286000 audit: BPF prog-id=21 op=UNLOAD Jan 20 06:39:02.288000 audit: BPF prog-id=32 op=LOAD Jan 20 06:39:02.288000 audit: BPF prog-id=15 op=UNLOAD Jan 20 06:39:02.288000 audit: BPF prog-id=33 op=LOAD Jan 20 06:39:02.288000 audit: BPF prog-id=34 op=LOAD Jan 20 06:39:02.288000 audit: BPF prog-id=16 op=UNLOAD Jan 20 06:39:02.288000 audit: BPF prog-id=17 op=UNLOAD Jan 20 06:39:02.289000 audit: BPF prog-id=35 op=LOAD Jan 20 06:39:02.291000 audit: BPF prog-id=22 op=UNLOAD Jan 20 06:39:02.291000 audit: BPF prog-id=36 op=LOAD Jan 20 06:39:02.291000 audit: BPF prog-id=37 op=LOAD Jan 20 06:39:02.291000 audit: BPF prog-id=23 op=UNLOAD Jan 20 06:39:02.291000 audit: BPF prog-id=24 op=UNLOAD Jan 20 06:39:02.292000 audit: BPF prog-id=38 op=LOAD Jan 20 06:39:02.292000 audit: BPF prog-id=25 op=UNLOAD Jan 20 06:39:02.292000 audit: BPF prog-id=39 op=LOAD Jan 20 06:39:02.292000 audit: BPF prog-id=40 op=LOAD Jan 20 06:39:02.292000 audit: BPF prog-id=26 op=UNLOAD Jan 20 06:39:02.292000 audit: BPF prog-id=27 op=UNLOAD Jan 20 06:39:02.330171 systemd[1]: Reload requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Jan 20 06:39:02.330494 systemd[1]: Reloading... Jan 20 06:39:02.375954 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 06:39:02.376018 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 06:39:02.377243 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 06:39:02.381935 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Jan 20 06:39:02.383112 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Jan 20 06:39:02.394246 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:39:02.396021 systemd-tmpfiles[1363]: Skipping /boot Jan 20 06:39:02.428112 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:39:02.429872 systemd-tmpfiles[1363]: Skipping /boot Jan 20 06:39:02.528917 zram_generator::config[1397]: No configuration found. Jan 20 06:39:02.799069 systemd[1]: Reloading finished in 467 ms. Jan 20 06:39:02.829895 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 06:39:02.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:02.832000 audit: BPF prog-id=41 op=LOAD Jan 20 06:39:02.832000 audit: BPF prog-id=31 op=UNLOAD Jan 20 06:39:02.835000 audit: BPF prog-id=42 op=LOAD Jan 20 06:39:02.835000 audit: BPF prog-id=28 op=UNLOAD Jan 20 06:39:02.835000 audit: BPF prog-id=43 op=LOAD Jan 20 06:39:02.835000 audit: BPF prog-id=44 op=LOAD Jan 20 06:39:02.835000 audit: BPF prog-id=29 op=UNLOAD Jan 20 06:39:02.835000 audit: BPF prog-id=30 op=UNLOAD Jan 20 06:39:02.836000 audit: BPF prog-id=45 op=LOAD Jan 20 06:39:02.837000 audit: BPF prog-id=32 op=UNLOAD Jan 20 06:39:02.837000 audit: BPF prog-id=46 op=LOAD Jan 20 06:39:02.837000 audit: BPF prog-id=47 op=LOAD Jan 20 06:39:02.837000 audit: BPF prog-id=33 op=UNLOAD Jan 20 06:39:02.837000 audit: BPF prog-id=34 op=UNLOAD Jan 20 06:39:02.838000 audit: BPF prog-id=48 op=LOAD Jan 20 06:39:02.838000 audit: BPF prog-id=38 op=UNLOAD Jan 20 06:39:02.838000 audit: BPF prog-id=49 op=LOAD Jan 20 06:39:02.838000 audit: BPF prog-id=50 op=LOAD Jan 20 06:39:02.838000 audit: BPF prog-id=39 op=UNLOAD Jan 20 06:39:02.838000 audit: BPF prog-id=40 op=UNLOAD Jan 20 06:39:02.840000 audit: BPF prog-id=51 op=LOAD Jan 20 06:39:02.840000 audit: BPF prog-id=35 op=UNLOAD Jan 20 06:39:02.840000 audit: BPF prog-id=52 op=LOAD Jan 20 06:39:02.840000 audit: BPF prog-id=53 op=LOAD Jan 20 06:39:02.840000 audit: BPF prog-id=36 op=UNLOAD Jan 20 06:39:02.840000 audit: BPF prog-id=37 op=UNLOAD Jan 20 06:39:02.848766 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:39:02.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:02.865291 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:39:02.868125 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 06:39:02.882662 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 06:39:02.886549 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 06:39:02.888000 audit: BPF prog-id=8 op=UNLOAD Jan 20 06:39:02.888000 audit: BPF prog-id=7 op=UNLOAD Jan 20 06:39:02.889000 audit: BPF prog-id=54 op=LOAD Jan 20 06:39:02.889000 audit: BPF prog-id=55 op=LOAD Jan 20 06:39:02.892037 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:39:02.895099 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 06:39:02.900857 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:02.901093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:39:02.906564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:39:02.917347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:39:02.938527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:39:02.939391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:39:02.939695 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
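The "Duplicate line for path ..., ignoring" warnings earlier come from two tmpfiles.d fragments declaring the same path; for reference, tmpfiles.d lines follow "Type Path Mode User Group Age Argument", roughly (mode and owner here are illustrative, not taken from the log):
    d /var/lib/nfs/sm      0700 statd statd -
    d /var/lib/nfs/sm.bak  0700 statd statd -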
Jan 20 06:39:02.940337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:39:02.940496 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:02.944458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:02.944688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:39:02.944926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:39:02.945130 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:39:02.945243 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:39:02.945338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:02.958707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:02.960089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:39:03.001000 audit[1445]: SYSTEM_BOOT pid=1445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.000865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:39:03.004671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:39:03.005174 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:39:03.005380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:39:03.005598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:03.010482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:39:03.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.010960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
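Most of the "skipped because of an unmet condition check" lines above are ordinary systemd unit conditions; a minimal illustrative unit fragment using the same directives and values seen in the log:
    [Unit]
    Description=Example of a condition-guarded unit (illustrative)
    ConditionVirtualization=xen
    ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67
    # ConditionVirtualization can be checked by hand with systemd-detect-virt;
    # an unmet condition skips the unit (as logged above) instead of failing it.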
Jan 20 06:39:03.012728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:39:03.013145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:39:03.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.015293 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:39:03.015711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:39:03.017753 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:39:03.018417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:39:03.036704 systemd[1]: Finished ensure-sysext.service. Jan 20 06:39:03.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.042737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:39:03.044075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:39:03.044000 audit: BPF prog-id=56 op=LOAD Jan 20 06:39:03.049139 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 06:39:03.050618 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 06:39:03.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.092940 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 06:39:03.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:03.099969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:39:03.114290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 06:39:03.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:03.141799 systemd-udevd[1444]: Using default interface naming scheme 'v257'. Jan 20 06:39:03.149000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:39:03.149000 audit[1480]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8f4436c0 a2=420 a3=0 items=0 ppid=1440 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:03.149000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:39:03.151104 augenrules[1480]: No rules Jan 20 06:39:03.153515 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:39:03.154644 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:39:03.190949 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 06:39:03.191753 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 06:39:03.200339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:39:03.215769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:39:03.408584 systemd-networkd[1494]: lo: Link UP Jan 20 06:39:03.408595 systemd-networkd[1494]: lo: Gained carrier Jan 20 06:39:03.421371 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:39:03.428465 systemd[1]: Reached target network.target - Network. Jan 20 06:39:03.434519 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 06:39:03.439466 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 06:39:03.469678 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 06:39:03.479248 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jan 20 06:39:03.481448 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 20 06:39:03.484217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:03.484422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:39:03.492049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:39:03.502275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:39:03.510299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:39:03.511727 systemd-networkd[1494]: eth0: Configuring with /run/systemd/network/10-4e:96:f2:37:93:96.network. 
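The eth0 setup above is driven by a runtime-generated .network file; its real contents are not shown in the log, but a minimal file of that shape might look like the following (MAC taken from the file name, the rest assumed):
    [Match]
    MACAddress=4e:96:f2:37:93:96

    [Network]
    DHCP=yes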
Jan 20 06:39:03.512069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:39:03.512262 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:39:03.512306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:39:03.512345 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:39:03.512378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:39:03.521920 systemd-networkd[1494]: eth0: Link UP Jan 20 06:39:03.522139 systemd-networkd[1494]: eth0: Gained carrier Jan 20 06:39:03.542637 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 06:39:03.547960 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:03.567887 kernel: ISO 9660 Extensions: RRIP_1991A Jan 20 06:39:03.576473 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 20 06:39:03.587634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:39:03.588675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:39:03.590465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:39:03.591139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:39:03.607719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:39:03.620262 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:39:03.621130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:39:03.624771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:39:03.666146 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 06:39:03.673078 systemd-networkd[1494]: eth1: Configuring with /run/systemd/network/10-a6:25:29:26:df:b7.network. Jan 20 06:39:03.674030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 06:39:03.674029 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:03.674745 systemd-networkd[1494]: eth1: Link UP Jan 20 06:39:03.676924 systemd-networkd[1494]: eth1: Gained carrier Jan 20 06:39:03.677239 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:03.681396 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:03.684102 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. 
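Once systemd-networkd reports "Link UP" / "Gained carrier" as above, per-link state can be checked with networkctl, e.g.:
    networkctl list            # one line per link with its operational state
    networkctl status eth0     # addresses, gateway and DNS for a single link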
Jan 20 06:39:03.731700 kernel: ACPI: button: Power Button [PWRF] Jan 20 06:39:03.770152 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 20 06:39:03.770650 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 06:39:03.807370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 06:39:03.812009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 06:39:03.865786 ldconfig[1442]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 06:39:03.867256 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 06:39:03.876570 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 06:39:03.885269 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 06:39:03.896385 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 20 06:39:03.896499 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 20 06:39:03.906834 kernel: Console: switching to colour dummy device 80x25 Jan 20 06:39:03.906935 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 20 06:39:03.906953 kernel: [drm] features: -context_init Jan 20 06:39:03.910870 kernel: [drm] number of scanouts: 1 Jan 20 06:39:03.910993 kernel: [drm] number of cap sets: 0 Jan 20 06:39:03.913842 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jan 20 06:39:03.923959 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 20 06:39:03.924069 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 06:39:03.932838 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 20 06:39:03.942025 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 06:39:03.944264 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:39:03.944561 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 06:39:03.944677 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 06:39:03.944764 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 06:39:03.948114 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 06:39:03.948435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 06:39:03.948559 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 06:39:03.948773 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 06:39:03.951099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 06:39:03.951217 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 06:39:03.951263 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:39:03.951355 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:39:03.953842 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 06:39:03.963085 systemd[1]: Starting docker.socket - Docker Socket for the API... 
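The timers started above (logrotate.timer, mdadm.timer, systemd-sysupdate.timer, systemd-tmpfiles-clean.timer, ...) and their next elapse times can be listed at any point with:
    systemctl list-timers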
Jan 20 06:39:03.970626 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 06:39:03.972243 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 06:39:03.972434 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 06:39:03.978562 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 06:39:03.980460 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 06:39:03.983009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 06:39:03.986168 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:39:03.988687 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:39:03.989366 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:39:03.989431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:39:03.996009 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 06:39:04.001432 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 06:39:04.009175 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 06:39:04.015445 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 06:39:04.024248 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 06:39:04.035248 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 06:39:04.037036 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 06:39:04.040767 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 06:39:04.049319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 06:39:04.061094 jq[1555]: false Jan 20 06:39:04.065089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 06:39:04.073165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 06:39:04.082006 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 06:39:04.098466 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 06:39:04.101324 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 06:39:04.104306 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 06:39:04.106740 extend-filesystems[1556]: Found /dev/vda6 Jan 20 06:39:04.111058 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 06:39:04.118015 extend-filesystems[1556]: Found /dev/vda9 Jan 20 06:39:04.131227 extend-filesystems[1556]: Checking size of /dev/vda9 Jan 20 06:39:04.121052 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 06:39:04.135934 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 06:39:04.139281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 06:39:04.140557 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
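Similarly, the socket units reached above (sshd.socket, docker.socket, dbus.socket, systemd-hostnamed.socket, ...) can be listed with:
    systemctl list-sockets     # listening sockets and the services they activate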
Jan 20 06:39:04.141124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 06:39:04.141889 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 06:39:04.172861 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 20 06:39:04.171375 oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 20 06:39:04.179823 extend-filesystems[1556]: Resized partition /dev/vda9 Jan 20 06:39:04.184855 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 20 06:39:04.184855 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:39:04.184855 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 20 06:39:04.184099 oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 20 06:39:04.184128 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:39:04.184205 oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 20 06:39:04.188606 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 20 06:39:04.188606 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:39:04.187391 oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 20 06:39:04.190911 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 06:39:04.187410 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:39:04.195665 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 06:39:04.196045 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 06:39:04.198952 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Jan 20 06:39:04.253213 jq[1568]: true Jan 20 06:39:04.267786 coreos-metadata[1552]: Jan 20 06:39:04.267 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 20 06:39:04.289013 coreos-metadata[1552]: Jan 20 06:39:04.283 INFO Fetch successful Jan 20 06:39:04.288777 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 06:39:04.288282 dbus-daemon[1553]: [system] SELinux support is enabled Jan 20 06:39:04.298833 jq[1596]: true Jan 20 06:39:04.298871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 06:39:04.298918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 06:39:04.301417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 06:39:04.301570 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 20 06:39:04.301847 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
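The coreos-metadata fetch above hits DigitalOcean's link-local metadata service; the same document can be retrieved by hand from the droplet (assuming curl is available; jq is already present per the log):
    curl -s http://169.254.169.254/metadata/v1.json | jq .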
Jan 20 06:39:04.315593 update_engine[1566]: I20260120 06:39:04.313140 1566 main.cc:92] Flatcar Update Engine starting Jan 20 06:39:04.323990 tar[1585]: linux-amd64/LICENSE Jan 20 06:39:04.323990 tar[1585]: linux-amd64/helm Jan 20 06:39:04.368160 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 06:39:04.368605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 06:39:04.387678 systemd[1]: Started update-engine.service - Update Engine. Jan 20 06:39:04.400988 update_engine[1566]: I20260120 06:39:04.397153 1566 update_check_scheduler.cc:74] Next update check in 5m13s Jan 20 06:39:04.467841 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 20 06:39:04.484201 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 06:39:04.488333 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 06:39:04.488333 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 20 06:39:04.488333 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 20 06:39:04.528456 extend-filesystems[1556]: Resized filesystem in /dev/vda9 Jan 20 06:39:04.490052 systemd-logind[1565]: New seat seat0. Jan 20 06:39:04.501242 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 06:39:04.501694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 06:39:04.524240 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 06:39:04.548968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:39:04.604564 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 06:39:04.609240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 06:39:04.661071 systemd-networkd[1494]: eth0: Gained IPv6LL Jan 20 06:39:04.661854 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:04.665610 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 06:39:04.675532 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 06:39:04.694472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:04.703679 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 06:39:04.728856 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:39:04.737683 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 06:39:04.749744 systemd[1]: Starting sshkeys.service... Jan 20 06:39:04.777688 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 06:39:04.806794 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 06:39:04.834395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:39:04.843074 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 06:39:04.991087 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 06:39:05.009220 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 06:39:05.031672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
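The extend-filesystems activity above is an online ext4 grow of the root partition (456704 to 14138363 blocks per the kernel line); done manually it is roughly:
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem to fill the partition
    df -h /                # confirm the new size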
Jan 20 06:39:05.073844 containerd[1591]: time="2026-01-20T06:39:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 06:39:05.074359 containerd[1591]: time="2026-01-20T06:39:05.074190605Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 06:39:05.114625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:39:05.115041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:39:05.116649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:39:05.135333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:39:05.171621 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 06:39:05.192134 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 06:39:05.214169 containerd[1591]: time="2026-01-20T06:39:05.214082899Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.769µs" Jan 20 06:39:05.214169 containerd[1591]: time="2026-01-20T06:39:05.214148425Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 06:39:05.214382 containerd[1591]: time="2026-01-20T06:39:05.214214221Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 06:39:05.214382 containerd[1591]: time="2026-01-20T06:39:05.214230743Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 06:39:05.215441 containerd[1591]: time="2026-01-20T06:39:05.214532745Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 06:39:05.215441 containerd[1591]: time="2026-01-20T06:39:05.214572242Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:39:05.215441 containerd[1591]: time="2026-01-20T06:39:05.214635267Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:39:05.215441 containerd[1591]: time="2026-01-20T06:39:05.214647623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.221624 containerd[1591]: time="2026-01-20T06:39:05.221545356Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.221624 containerd[1591]: time="2026-01-20T06:39:05.221587958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:39:05.221624 containerd[1591]: time="2026-01-20T06:39:05.221605759Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:39:05.221624 containerd[1591]: time="2026-01-20T06:39:05.221614876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228105 containerd[1591]: 
time="2026-01-20T06:39:05.228037516Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228105 containerd[1591]: time="2026-01-20T06:39:05.228074989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228268 containerd[1591]: time="2026-01-20T06:39:05.228246883Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228543 containerd[1591]: time="2026-01-20T06:39:05.228501684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228578 containerd[1591]: time="2026-01-20T06:39:05.228543093Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:39:05.228578 containerd[1591]: time="2026-01-20T06:39:05.228554943Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 06:39:05.228765 containerd[1591]: time="2026-01-20T06:39:05.228608406Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 06:39:05.232906 containerd[1591]: time="2026-01-20T06:39:05.232833502Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 06:39:05.233583 containerd[1591]: time="2026-01-20T06:39:05.233544129Z" level=info msg="metadata content store policy set" policy=shared Jan 20 06:39:05.237662 containerd[1591]: time="2026-01-20T06:39:05.237595916Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238490896Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238651705Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238669157Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238684573Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238697389Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238710810Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238723664Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238737642Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238937558Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238966497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238979323Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.238989737Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 06:39:05.239936 containerd[1591]: time="2026-01-20T06:39:05.239004565Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239195340Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239217900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239237024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239249155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239261046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239272908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239304014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239316540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239327958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239340188Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239351207Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239412987Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239462331Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239475694Z" level=info msg="Start snapshots syncer" Jan 20 06:39:05.240409 containerd[1591]: time="2026-01-20T06:39:05.239500021Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 06:39:05.242129 containerd[1591]: time="2026-01-20T06:39:05.242045497Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.243942829Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244079435Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244258483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244285446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244298088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244310673Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244324127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244341338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244359243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244376991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244394047Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244432021Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244455734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:39:05.244974 containerd[1591]: time="2026-01-20T06:39:05.244471387Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:39:05.244515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244487508Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244501451Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244518995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244537406Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244561425Z" level=info msg="runtime interface created" Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244570075Z" level=info msg="created NRI interface" Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244584135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244612900Z" level=info msg="Connect containerd service" Jan 20 06:39:05.245529 containerd[1591]: time="2026-01-20T06:39:05.244648475Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 06:39:05.245386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:39:05.246755 containerd[1591]: time="2026-01-20T06:39:05.246699887Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 06:39:05.265429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:39:05.286975 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 06:39:05.287867 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 06:39:05.300972 coreos-metadata[1660]: Jan 20 06:39:05.300 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 20 06:39:05.306841 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 06:39:05.308986 systemd-networkd[1494]: eth1: Gained IPv6LL Jan 20 06:39:05.311598 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. 
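The CRI plugin error just above is expected on a first boot: containerd 2.1.5 migrated a version-2 config on the fly (hence the earlier "use `containerd config migrate`" warning) and found nothing in /etc/cni/net.d, so pod networking stays unconfigured until a CNI provider installs a conflist. A minimal sketch under stated assumptions (the stdout behaviour of the migrate subcommand, the target paths, and the bridge subnet are placeholders, not taken from this host):

    # Persist the migrated config so the warning stops (assumes the subcommand
    # prints to stdout like `containerd config default`; Flatcar's stock file
    # sits in read-only /usr/share/containerd, so /etc is the writable override).
    containerd config migrate > /etc/containerd/config.toml

    # Give the CRI plugin a network config to load; a real CNI provider
    # (flannel, cilium, ...) would normally write its own file here later.
    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF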
Jan 20 06:39:05.338602 coreos-metadata[1660]: Jan 20 06:39:05.338 INFO Fetch successful Jan 20 06:39:05.344221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:39:05.355902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 06:39:05.363787 unknown[1660]: wrote ssh authorized keys file for user: core Jan 20 06:39:05.372064 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 06:39:05.381063 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 06:39:05.385659 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 06:39:05.427459 update-ssh-keys[1691]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:39:05.428082 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 06:39:05.435209 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 06:39:05.435322 systemd[1]: Finished sshkeys.service. Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.723490668Z" level=info msg="Start subscribing containerd event" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.723616607Z" level=info msg="Start recovering state" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.723988325Z" level=info msg="Start event monitor" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724023482Z" level=info msg="Start cni network conf syncer for default" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724036703Z" level=info msg="Start streaming server" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724057891Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724072744Z" level=info msg="runtime interface starting up..." Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724083464Z" level=info msg="starting plugins..." Jan 20 06:39:05.724924 containerd[1591]: time="2026-01-20T06:39:05.724111212Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 06:39:05.729430 kernel: EDAC MC: Ver: 3.0.0 Jan 20 06:39:05.729569 containerd[1591]: time="2026-01-20T06:39:05.726751189Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 06:39:05.729569 containerd[1591]: time="2026-01-20T06:39:05.729364410Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 06:39:05.731484 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 06:39:05.736787 containerd[1591]: time="2026-01-20T06:39:05.731566891Z" level=info msg="containerd successfully booted in 0.659088s" Jan 20 06:39:06.051919 tar[1585]: linux-amd64/README.md Jan 20 06:39:06.081055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 06:39:06.214465 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 06:39:06.223385 systemd[1]: Started sshd@0-164.92.87.233:22-20.161.92.111:37074.service - OpenSSH per-connection server daemon (20.161.92.111:37074). Jan 20 06:39:06.623249 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 37074 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:39:06.628295 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:06.645512 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
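Here coreos-metadata fetches the droplet's metadata document and writes the authorized_keys file for the core user from it. A hedged way to inspect the same document by hand (the endpoint is taken from the log; the public_keys field name and the availability of jq are assumptions):

    curl -s http://169.254.169.254/metadata/v1.json | jq '.public_keys'
    # without jq, the raw JSON is small enough to read directly:
    curl -s http://169.254.169.254/metadata/v1.json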
Jan 20 06:39:06.649624 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 06:39:06.666899 systemd-logind[1565]: New session 1 of user core. Jan 20 06:39:06.692577 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 06:39:06.700363 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 06:39:06.728519 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:06.735587 systemd-logind[1565]: New session 2 of user core. Jan 20 06:39:06.839208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:06.841128 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 06:39:06.852737 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:39:06.915697 systemd[1717]: Queued start job for default target default.target. Jan 20 06:39:06.923261 systemd[1717]: Created slice app.slice - User Application Slice. Jan 20 06:39:06.923543 systemd[1717]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 06:39:06.923564 systemd[1717]: Reached target paths.target - Paths. Jan 20 06:39:06.923631 systemd[1717]: Reached target timers.target - Timers. Jan 20 06:39:06.928007 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 06:39:06.929607 systemd[1717]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 06:39:06.957677 systemd[1717]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 06:39:06.966607 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 06:39:06.966821 systemd[1717]: Reached target sockets.target - Sockets. Jan 20 06:39:06.966913 systemd[1717]: Reached target basic.target - Basic System. Jan 20 06:39:06.966987 systemd[1717]: Reached target default.target - Main User Target. Jan 20 06:39:06.967042 systemd[1717]: Startup finished in 218ms. Jan 20 06:39:06.967694 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 06:39:06.975223 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 06:39:06.978974 systemd[1]: Startup finished in 2.808s (kernel) + 5.909s (initrd) + 7.559s (userspace) = 16.277s. Jan 20 06:39:07.192446 systemd[1]: Started sshd@1-164.92.87.233:22-20.161.92.111:37090.service - OpenSSH per-connection server daemon (20.161.92.111:37090). Jan 20 06:39:07.584882 sshd[1743]: Accepted publickey for core from 20.161.92.111 port 37090 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:39:07.587677 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:07.592604 kubelet[1729]: E0120 06:39:07.592433 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:39:07.598407 systemd-logind[1565]: New session 3 of user core. Jan 20 06:39:07.599967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:39:07.600334 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
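The first kubelet start above dies immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so a crash-looping kubelet.service before the node has been bootstrapped is expected rather than a fault. A quick check, as a sketch:

    # confirm the missing config is the only problem with the unit
    test -f /var/lib/kubelet/config.yaml || echo "kubelet config not written yet (expected before kubeadm init/join)"
    systemctl status kubelet.service --no-pager | tail -n 5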
Jan 20 06:39:07.601071 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 265.9M memory peak. Jan 20 06:39:07.612298 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 06:39:07.791514 sshd[1751]: Connection closed by 20.161.92.111 port 37090 Jan 20 06:39:07.792906 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:07.800017 systemd[1]: sshd@1-164.92.87.233:22-20.161.92.111:37090.service: Deactivated successfully. Jan 20 06:39:07.803315 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 06:39:07.806561 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. Jan 20 06:39:07.809471 systemd-logind[1565]: Removed session 3. Jan 20 06:39:07.875457 systemd[1]: Started sshd@2-164.92.87.233:22-20.161.92.111:37104.service - OpenSSH per-connection server daemon (20.161.92.111:37104). Jan 20 06:39:08.269950 sshd[1757]: Accepted publickey for core from 20.161.92.111 port 37104 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:39:08.271834 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:08.280365 systemd-logind[1565]: New session 4 of user core. Jan 20 06:39:08.285349 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 06:39:08.471839 sshd[1761]: Connection closed by 20.161.92.111 port 37104 Jan 20 06:39:08.471044 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:08.476899 systemd[1]: sshd@2-164.92.87.233:22-20.161.92.111:37104.service: Deactivated successfully. Jan 20 06:39:08.479785 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 06:39:08.483493 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. Jan 20 06:39:08.485171 systemd-logind[1565]: Removed session 4. Jan 20 06:39:08.540248 systemd[1]: Started sshd@3-164.92.87.233:22-20.161.92.111:37118.service - OpenSSH per-connection server daemon (20.161.92.111:37118). Jan 20 06:39:08.899242 sshd[1767]: Accepted publickey for core from 20.161.92.111 port 37118 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:39:08.901415 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:08.909542 systemd-logind[1565]: New session 5 of user core. Jan 20 06:39:08.916188 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 06:39:09.088140 sshd[1771]: Connection closed by 20.161.92.111 port 37118 Jan 20 06:39:09.089072 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:09.095415 systemd[1]: sshd@3-164.92.87.233:22-20.161.92.111:37118.service: Deactivated successfully. Jan 20 06:39:09.097795 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 06:39:09.099127 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. Jan 20 06:39:09.101329 systemd-logind[1565]: Removed session 5. Jan 20 06:39:09.162893 systemd[1]: Started sshd@4-164.92.87.233:22-20.161.92.111:37122.service - OpenSSH per-connection server daemon (20.161.92.111:37122). Jan 20 06:39:09.522889 sshd[1777]: Accepted publickey for core from 20.161.92.111 port 37122 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:39:09.523637 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:09.531057 systemd-logind[1565]: New session 6 of user core. Jan 20 06:39:09.541315 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 20 06:39:09.666026 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 06:39:09.667155 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:39:10.319869 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 06:39:10.342650 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 06:39:10.814334 dockerd[1801]: time="2026-01-20T06:39:10.813985023Z" level=info msg="Starting up" Jan 20 06:39:10.820085 dockerd[1801]: time="2026-01-20T06:39:10.820014815Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 06:39:10.845428 dockerd[1801]: time="2026-01-20T06:39:10.845341429Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 06:39:10.873607 systemd[1]: var-lib-docker-metacopy\x2dcheck4066427064-merged.mount: Deactivated successfully. Jan 20 06:39:10.892209 dockerd[1801]: time="2026-01-20T06:39:10.892117995Z" level=info msg="Loading containers: start." Jan 20 06:39:10.904872 kernel: Initializing XFRM netlink socket Jan 20 06:39:11.202933 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:11.204783 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:11.215963 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:11.256276 systemd-networkd[1494]: docker0: Link UP Jan 20 06:39:11.257232 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Jan 20 06:39:11.259842 dockerd[1801]: time="2026-01-20T06:39:11.259749619Z" level=info msg="Loading containers: done." Jan 20 06:39:11.280953 dockerd[1801]: time="2026-01-20T06:39:11.280890440Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 06:39:11.281157 dockerd[1801]: time="2026-01-20T06:39:11.281017052Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 06:39:11.281157 dockerd[1801]: time="2026-01-20T06:39:11.281127211Z" level=info msg="Initializing buildkit" Jan 20 06:39:11.304161 dockerd[1801]: time="2026-01-20T06:39:11.304099636Z" level=info msg="Completed buildkit initialization" Jan 20 06:39:11.314855 dockerd[1801]: time="2026-01-20T06:39:11.314771980Z" level=info msg="Daemon has completed initialization" Jan 20 06:39:11.315901 dockerd[1801]: time="2026-01-20T06:39:11.314967340Z" level=info msg="API listen on /run/docker.sock" Jan 20 06:39:11.315158 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 06:39:12.156814 containerd[1591]: time="2026-01-20T06:39:12.156736484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 06:39:12.851610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656411196.mount: Deactivated successfully. 
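dockerd comes up with the overlay2 storage driver and notes that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that only affects image-build performance, not correctness. A hedged way to confirm the driver the daemon settled on:

    docker info --format '{{.Driver}}'              # expected from the log: overlay2
    docker info --format '{{json .DriverStatus}}'   # driver details, including the native-diff status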
Jan 20 06:39:14.060492 containerd[1591]: time="2026-01-20T06:39:14.060292535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:14.061742 containerd[1591]: time="2026-01-20T06:39:14.061693866Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 20 06:39:14.062859 containerd[1591]: time="2026-01-20T06:39:14.062325503Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:14.068172 containerd[1591]: time="2026-01-20T06:39:14.066024057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:14.068172 containerd[1591]: time="2026-01-20T06:39:14.067488831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.910696019s" Jan 20 06:39:14.068172 containerd[1591]: time="2026-01-20T06:39:14.067548223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 06:39:14.068873 containerd[1591]: time="2026-01-20T06:39:14.068765280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 06:39:16.045972 containerd[1591]: time="2026-01-20T06:39:16.045882182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:16.049022 containerd[1591]: time="2026-01-20T06:39:16.048915996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 06:39:16.049598 containerd[1591]: time="2026-01-20T06:39:16.049550979Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:16.055266 containerd[1591]: time="2026-01-20T06:39:16.055130224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:16.056832 containerd[1591]: time="2026-01-20T06:39:16.056645343Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.987577917s" Jan 20 06:39:16.056832 containerd[1591]: time="2026-01-20T06:39:16.056718161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 
06:39:16.057830 containerd[1591]: time="2026-01-20T06:39:16.057770724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 06:39:17.745832 containerd[1591]: time="2026-01-20T06:39:17.745555669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.746720 containerd[1591]: time="2026-01-20T06:39:17.746605828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 06:39:17.747627 containerd[1591]: time="2026-01-20T06:39:17.747579529Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.750515 containerd[1591]: time="2026-01-20T06:39:17.750445544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.751834 containerd[1591]: time="2026-01-20T06:39:17.751628856Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.693665605s" Jan 20 06:39:17.751834 containerd[1591]: time="2026-01-20T06:39:17.751673510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 06:39:17.752755 containerd[1591]: time="2026-01-20T06:39:17.752558804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 06:39:17.843002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 06:39:17.846403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:18.068992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:18.081487 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:39:18.140119 kubelet[2095]: E0120 06:39:18.140024 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:39:18.144733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:39:18.144970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:39:18.146550 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.6M memory peak. Jan 20 06:39:18.838163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310065958.mount: Deactivated successfully. 
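While the kubelet keeps failing on its missing config, containerd is already pulling the v1.32.11 control-plane images (apiserver in about 1.9 s, controller-manager in about 2.0 s, scheduler in about 1.7 s per the lines above). A hedged way to see what those pulls left behind in containerd's k8s.io namespace:

    ctr -n k8s.io images ls | grep registry.k8s.io
    # or, once crictl has a runtime endpoint configured:
    crictl images | grep -E 'kube-|coredns|etcd|pause'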
Jan 20 06:39:19.439832 containerd[1591]: time="2026-01-20T06:39:19.439162343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:19.440632 containerd[1591]: time="2026-01-20T06:39:19.440597370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 06:39:19.440974 containerd[1591]: time="2026-01-20T06:39:19.440945196Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:19.442899 containerd[1591]: time="2026-01-20T06:39:19.442859067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:19.443733 containerd[1591]: time="2026-01-20T06:39:19.443363371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.690772199s" Jan 20 06:39:19.443733 containerd[1591]: time="2026-01-20T06:39:19.443400451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 06:39:19.443896 containerd[1591]: time="2026-01-20T06:39:19.443879356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 06:39:19.445060 systemd-resolved[1267]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 20 06:39:20.249972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872401912.mount: Deactivated successfully. 
Jan 20 06:39:21.108284 containerd[1591]: time="2026-01-20T06:39:21.108211586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:21.109481 containerd[1591]: time="2026-01-20T06:39:21.109295314Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 20 06:39:21.110056 containerd[1591]: time="2026-01-20T06:39:21.110015662Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:21.114319 containerd[1591]: time="2026-01-20T06:39:21.112633382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:21.114319 containerd[1591]: time="2026-01-20T06:39:21.113865772Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.669961926s" Jan 20 06:39:21.114319 containerd[1591]: time="2026-01-20T06:39:21.113903502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 06:39:21.114650 containerd[1591]: time="2026-01-20T06:39:21.114605823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 06:39:21.693166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970748396.mount: Deactivated successfully. 
Jan 20 06:39:21.697857 containerd[1591]: time="2026-01-20T06:39:21.697287366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:39:21.698784 containerd[1591]: time="2026-01-20T06:39:21.698756464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 06:39:21.699271 containerd[1591]: time="2026-01-20T06:39:21.699250955Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:39:21.702029 containerd[1591]: time="2026-01-20T06:39:21.701995379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:39:21.702696 containerd[1591]: time="2026-01-20T06:39:21.702591956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.546607ms" Jan 20 06:39:21.702955 containerd[1591]: time="2026-01-20T06:39:21.702823192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 06:39:21.704047 containerd[1591]: time="2026-01-20T06:39:21.704005036Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 06:39:22.348433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338067397.mount: Deactivated successfully. Jan 20 06:39:22.517067 systemd-resolved[1267]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
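systemd-resolved falls back to plain UDP for the DigitalOcean resolvers (67.207.67.2 and 67.207.67.3) after its EDNS0 probes fail; name resolution still works, just without the larger EDNS0 payloads. A hedged way to inspect the per-link DNS state (the interface names are an assumption, the log only names eth1 explicitly):

    resolvectl status eth0 eth1 | grep -E 'Link|DNS Servers'
    resolvectl query registry.k8s.io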
Jan 20 06:39:25.152837 containerd[1591]: time="2026-01-20T06:39:25.151862295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:25.153734 containerd[1591]: time="2026-01-20T06:39:25.153700480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 20 06:39:25.154829 containerd[1591]: time="2026-01-20T06:39:25.154773759Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:25.158665 containerd[1591]: time="2026-01-20T06:39:25.158599364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:25.159959 containerd[1591]: time="2026-01-20T06:39:25.159913469Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.455858057s" Jan 20 06:39:25.159959 containerd[1591]: time="2026-01-20T06:39:25.159954448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 06:39:28.342525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 06:39:28.347095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:28.538088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:28.548182 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:39:28.555286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:28.557686 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:39:28.558048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:28.558845 systemd[1]: kubelet.service: Consumed 145ms CPU time, 101.7M memory peak. Jan 20 06:39:28.567686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:28.600226 systemd[1]: Reload requested from client PID 2262 ('systemctl') (unit session-6.scope)... Jan 20 06:39:28.600416 systemd[1]: Reloading... Jan 20 06:39:28.773831 zram_generator::config[2326]: No configuration found. Jan 20 06:39:29.012008 systemd[1]: Reloading finished in 411 ms. Jan 20 06:39:29.072373 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:29.077759 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:39:29.078161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:29.078239 systemd[1]: kubelet.service: Consumed 135ms CPU time, 98.4M memory peak. Jan 20 06:39:29.081088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:29.255845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
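After the etcd image pull completes, the reload/restart churn above is driven from the interactive SSH session (note "Reload requested from client PID 2262 ('systemctl') (unit session-6.scope)"). The exact invocation is not in the log, but the conventional sequence that produces this pattern is:

    systemctl daemon-reload            # triggers the "Reloading..." lines
    systemctl restart kubelet.service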
Jan 20 06:39:29.271430 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:39:29.352143 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:39:29.352143 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:39:29.352143 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:39:29.352659 kubelet[2364]: I0120 06:39:29.352302 2364 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:39:29.637480 kubelet[2364]: I0120 06:39:29.637396 2364 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:39:29.637480 kubelet[2364]: I0120 06:39:29.637460 2364 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:39:29.638041 kubelet[2364]: I0120 06:39:29.638003 2364 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:39:29.677411 kubelet[2364]: E0120 06:39:29.677356 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.92.87.233:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:29.679530 kubelet[2364]: I0120 06:39:29.677577 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:39:29.699268 kubelet[2364]: I0120 06:39:29.699234 2364 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:39:29.704297 kubelet[2364]: I0120 06:39:29.704257 2364 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 06:39:29.706921 kubelet[2364]: I0120 06:39:29.706819 2364 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:39:29.707101 kubelet[2364]: I0120 06:39:29.706898 2364 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585.0.0-n-f46ee37080","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:39:29.708879 kubelet[2364]: I0120 06:39:29.708826 2364 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:39:29.708879 kubelet[2364]: I0120 06:39:29.708870 2364 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:39:29.709085 kubelet[2364]: I0120 06:39:29.709036 2364 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:39:29.713038 kubelet[2364]: I0120 06:39:29.712995 2364 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:39:29.713038 kubelet[2364]: I0120 06:39:29.713046 2364 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:39:29.714136 kubelet[2364]: I0120 06:39:29.713584 2364 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:39:29.714136 kubelet[2364]: I0120 06:39:29.713613 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:39:29.719405 kubelet[2364]: W0120 06:39:29.719325 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.87.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585.0.0-n-f46ee37080&limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:29.720097 kubelet[2364]: E0120 06:39:29.720058 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.92.87.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585.0.0-n-f46ee37080&limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:29.721816 
kubelet[2364]: I0120 06:39:29.721751 2364 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:39:29.726133 kubelet[2364]: I0120 06:39:29.725856 2364 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:39:29.726672 kubelet[2364]: W0120 06:39:29.726636 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 06:39:29.726929 kubelet[2364]: W0120 06:39:29.726675 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.87.233:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:29.727081 kubelet[2364]: E0120 06:39:29.727058 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.87.233:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:29.728298 kubelet[2364]: I0120 06:39:29.728020 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:39:29.728298 kubelet[2364]: I0120 06:39:29.728065 2364 server.go:1287] "Started kubelet" Jan 20 06:39:29.729410 kubelet[2364]: I0120 06:39:29.728966 2364 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:39:29.730750 kubelet[2364]: I0120 06:39:29.730222 2364 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:39:29.733211 kubelet[2364]: I0120 06:39:29.733154 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:39:29.733611 kubelet[2364]: I0120 06:39:29.733595 2364 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:39:29.733842 kubelet[2364]: I0120 06:39:29.733820 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:39:29.742885 kubelet[2364]: E0120 06:39:29.735338 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.87.233:6443/api/v1/namespaces/default/events\": dial tcp 164.92.87.233:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4585.0.0-n-f46ee37080.188c5d2599f7068e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4585.0.0-n-f46ee37080,UID:ci-4585.0.0-n-f46ee37080,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4585.0.0-n-f46ee37080,},FirstTimestamp:2026-01-20 06:39:29.728038542 +0000 UTC m=+0.449852650,LastTimestamp:2026-01-20 06:39:29.728038542 +0000 UTC m=+0.449852650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4585.0.0-n-f46ee37080,}" Jan 20 06:39:29.750659 kubelet[2364]: I0120 06:39:29.750283 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:39:29.754847 kubelet[2364]: I0120 06:39:29.754075 2364 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:39:29.754847 kubelet[2364]: E0120 
06:39:29.754469 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585.0.0-n-f46ee37080\" not found" Jan 20 06:39:29.757907 kubelet[2364]: I0120 06:39:29.755720 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:39:29.757907 kubelet[2364]: I0120 06:39:29.755830 2364 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:39:29.762694 kubelet[2364]: W0120 06:39:29.762616 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.87.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:29.762992 kubelet[2364]: E0120 06:39:29.762961 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.87.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:29.763205 kubelet[2364]: E0120 06:39:29.763179 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-f46ee37080?timeout=10s\": dial tcp 164.92.87.233:6443: connect: connection refused" interval="200ms" Jan 20 06:39:29.767532 kubelet[2364]: I0120 06:39:29.767491 2364 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:39:29.767886 kubelet[2364]: I0120 06:39:29.767858 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:39:29.774610 kubelet[2364]: I0120 06:39:29.774575 2364 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:39:29.804301 kubelet[2364]: E0120 06:39:29.804265 2364 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:39:29.811633 kubelet[2364]: I0120 06:39:29.811595 2364 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:39:29.811633 kubelet[2364]: I0120 06:39:29.811635 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:39:29.811854 kubelet[2364]: I0120 06:39:29.811659 2364 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:39:29.812507 kubelet[2364]: I0120 06:39:29.812468 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:39:29.813953 kubelet[2364]: I0120 06:39:29.813930 2364 policy_none.go:49] "None policy: Start" Jan 20 06:39:29.814154 kubelet[2364]: I0120 06:39:29.814139 2364 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:39:29.814261 kubelet[2364]: I0120 06:39:29.814249 2364 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:39:29.815310 kubelet[2364]: I0120 06:39:29.815166 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:39:29.815310 kubelet[2364]: I0120 06:39:29.815197 2364 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:39:29.815310 kubelet[2364]: I0120 06:39:29.815226 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
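This kubelet start finally loads its config: the NodeConfig dump above shows the systemd cgroup driver, the default hard-eviction thresholds, and the static pod path, while the deprecation warnings ask for the remaining flags to move into the config file. A sketch of a KubeletConfiguration consistent with just those logged values, written to a scratch path for comparison (the real file is /var/lib/kubelet/config.yaml and contains more than this):

    cat <<'EOF' > /tmp/kubelet-config-sketch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF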
Jan 20 06:39:29.815310 kubelet[2364]: I0120 06:39:29.815236 2364 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:39:29.815531 kubelet[2364]: E0120 06:39:29.815310 2364 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:39:29.818260 kubelet[2364]: W0120 06:39:29.818037 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.87.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:29.818260 kubelet[2364]: E0120 06:39:29.818099 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.87.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:29.826584 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 06:39:29.846196 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 06:39:29.851904 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 06:39:29.854900 kubelet[2364]: E0120 06:39:29.854847 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585.0.0-n-f46ee37080\" not found" Jan 20 06:39:29.862626 kubelet[2364]: I0120 06:39:29.862575 2364 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:39:29.862926 kubelet[2364]: I0120 06:39:29.862834 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:39:29.862926 kubelet[2364]: I0120 06:39:29.862855 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:39:29.863568 kubelet[2364]: I0120 06:39:29.863538 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:39:29.867345 kubelet[2364]: E0120 06:39:29.867213 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 06:39:29.867345 kubelet[2364]: E0120 06:39:29.867291 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4585.0.0-n-f46ee37080\" not found" Jan 20 06:39:29.929200 systemd[1]: Created slice kubepods-burstable-pod833f2805c0b1b5a37a767887155e81d7.slice - libcontainer container kubepods-burstable-pod833f2805c0b1b5a37a767887155e81d7.slice. Jan 20 06:39:29.951071 kubelet[2364]: E0120 06:39:29.950766 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.954448 systemd[1]: Created slice kubepods-burstable-podc5131459b9e32e755408d95f91c57fa6.slice - libcontainer container kubepods-burstable-podc5131459b9e32e755408d95f91c57fa6.slice. 
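The kubepods-burstable pod slices created above belong to the control-plane static pods the kubelet is about to start from its manifest directory; the repeated connection-refused and "No need to create a mirror pod" errors are expected until the kube-apiserver static pod itself is listening on 164.92.87.233:6443. A hedged look at the inputs involved (the manifest file names follow kubeadm convention and are an assumption; the CA path is the one logged earlier):

    ls /etc/kubernetes/manifests
    # typically: etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
    ls -l /etc/kubernetes/pki/ca.crt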
Jan 20 06:39:29.956520 kubelet[2364]: I0120 06:39:29.956470 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-ca-certs\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.956892 kubelet[2364]: I0120 06:39:29.956720 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-k8s-certs\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.956892 kubelet[2364]: I0120 06:39:29.956779 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.956892 kubelet[2364]: I0120 06:39:29.956820 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-k8s-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.956892 kubelet[2364]: I0120 06:39:29.956845 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.957233 kubelet[2364]: I0120 06:39:29.957090 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.957233 kubelet[2364]: I0120 06:39:29.957128 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-ca-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.957233 kubelet[2364]: I0120 06:39:29.957175 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-kubeconfig\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.957233 kubelet[2364]: I0120 06:39:29.957199 2364 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e1d81878d6ebacadfdcf6a750730b1f-kubeconfig\") pod \"kube-scheduler-ci-4585.0.0-n-f46ee37080\" (UID: \"2e1d81878d6ebacadfdcf6a750730b1f\") " pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.958953 kubelet[2364]: E0120 06:39:29.958909 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.964560 kubelet[2364]: I0120 06:39:29.964461 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.964560 kubelet[2364]: E0120 06:39:29.964531 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-f46ee37080?timeout=10s\": dial tcp 164.92.87.233:6443: connect: connection refused" interval="400ms" Jan 20 06:39:29.965236 kubelet[2364]: E0120 06:39:29.965200 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.87.233:6443/api/v1/nodes\": dial tcp 164.92.87.233:6443: connect: connection refused" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:29.970746 systemd[1]: Created slice kubepods-burstable-pod2e1d81878d6ebacadfdcf6a750730b1f.slice - libcontainer container kubepods-burstable-pod2e1d81878d6ebacadfdcf6a750730b1f.slice. Jan 20 06:39:29.974058 kubelet[2364]: E0120 06:39:29.973940 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:30.167201 kubelet[2364]: I0120 06:39:30.167165 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:30.167660 kubelet[2364]: E0120 06:39:30.167623 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.87.233:6443/api/v1/nodes\": dial tcp 164.92.87.233:6443: connect: connection refused" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:30.251983 kubelet[2364]: E0120 06:39:30.251409 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.253892 containerd[1591]: time="2026-01-20T06:39:30.253848945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585.0.0-n-f46ee37080,Uid:833f2805c0b1b5a37a767887155e81d7,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:30.259844 kubelet[2364]: E0120 06:39:30.259786 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.260415 containerd[1591]: time="2026-01-20T06:39:30.260362534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585.0.0-n-f46ee37080,Uid:c5131459b9e32e755408d95f91c57fa6,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:30.274737 kubelet[2364]: E0120 06:39:30.274697 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.278833 containerd[1591]: time="2026-01-20T06:39:30.276858524Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585.0.0-n-f46ee37080,Uid:2e1d81878d6ebacadfdcf6a750730b1f,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:30.368010 kubelet[2364]: E0120 06:39:30.367888 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-f46ee37080?timeout=10s\": dial tcp 164.92.87.233:6443: connect: connection refused" interval="800ms" Jan 20 06:39:30.392621 containerd[1591]: time="2026-01-20T06:39:30.392322813Z" level=info msg="connecting to shim e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609" address="unix:///run/containerd/s/60f9779851ae1dad69cd937c42132f22a93a457d42c31ed0371a2125bc9cc60b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:30.394040 containerd[1591]: time="2026-01-20T06:39:30.393983605Z" level=info msg="connecting to shim 850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06" address="unix:///run/containerd/s/ec7c96c6d18b4ed49b58490317bd8cc8c0634c538bf2c3ac3b95ee52d9c70abe" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:30.399083 containerd[1591]: time="2026-01-20T06:39:30.399018265Z" level=info msg="connecting to shim d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17" address="unix:///run/containerd/s/2a15380cbbc46c36fa5ea24cf0ec8d6c186880bda9f92230506f6dbc24b2d7c9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:30.528196 systemd[1]: Started cri-containerd-850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06.scope - libcontainer container 850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06. Jan 20 06:39:30.530210 systemd[1]: Started cri-containerd-d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17.scope - libcontainer container d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17. Jan 20 06:39:30.532352 systemd[1]: Started cri-containerd-e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609.scope - libcontainer container e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609. 
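The three control-plane sandboxes are created in containerd's k8s.io namespace (namespace=k8s.io in the shim connections above), and each one gets a cri-containerd-<id>.scope unit. A sketch that inspects the same namespace with the containerd Go client, assuming the pre-2.0 import path github.com/containerd/containerd and the default /run/containerd/containerd.sock socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; the per-container shim sockets in the log live under /run/containerd/s/.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed sandboxes and containers are kept in the "k8s.io" namespace,
	// as shown by namespace=k8s.io in the "connecting to shim" entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
	}
}
```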
Jan 20 06:39:30.570195 kubelet[2364]: I0120 06:39:30.570144 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:30.571355 kubelet[2364]: E0120 06:39:30.570617 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.87.233:6443/api/v1/nodes\": dial tcp 164.92.87.233:6443: connect: connection refused" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:30.650942 containerd[1591]: time="2026-01-20T06:39:30.650890088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585.0.0-n-f46ee37080,Uid:833f2805c0b1b5a37a767887155e81d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17\"" Jan 20 06:39:30.653639 kubelet[2364]: E0120 06:39:30.653608 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.656188 containerd[1591]: time="2026-01-20T06:39:30.656052682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585.0.0-n-f46ee37080,Uid:c5131459b9e32e755408d95f91c57fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06\"" Jan 20 06:39:30.659408 kubelet[2364]: E0120 06:39:30.659293 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.661661 containerd[1591]: time="2026-01-20T06:39:30.661598381Z" level=info msg="CreateContainer within sandbox \"d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 06:39:30.666833 containerd[1591]: time="2026-01-20T06:39:30.666411292Z" level=info msg="CreateContainer within sandbox \"850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 06:39:30.678178 containerd[1591]: time="2026-01-20T06:39:30.678061319Z" level=info msg="Container e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:30.685853 containerd[1591]: time="2026-01-20T06:39:30.685017057Z" level=info msg="Container 0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:30.690336 containerd[1591]: time="2026-01-20T06:39:30.690279007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585.0.0-n-f46ee37080,Uid:2e1d81878d6ebacadfdcf6a750730b1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609\"" Jan 20 06:39:30.691225 containerd[1591]: time="2026-01-20T06:39:30.691181729Z" level=info msg="CreateContainer within sandbox \"d7476b9e00804faf3dd8459fe8fa9d19965c9de0aa2f0c59621eb96c31297a17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13\"" Jan 20 06:39:30.691929 kubelet[2364]: E0120 06:39:30.691892 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:30.692312 containerd[1591]: 
time="2026-01-20T06:39:30.692279999Z" level=info msg="StartContainer for \"e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13\"" Jan 20 06:39:30.695773 containerd[1591]: time="2026-01-20T06:39:30.695727900Z" level=info msg="connecting to shim e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13" address="unix:///run/containerd/s/2a15380cbbc46c36fa5ea24cf0ec8d6c186880bda9f92230506f6dbc24b2d7c9" protocol=ttrpc version=3 Jan 20 06:39:30.697593 containerd[1591]: time="2026-01-20T06:39:30.697519804Z" level=info msg="CreateContainer within sandbox \"850f7a23aff152e7808bc697ab49be1a664c824f9313f08e663d548e113e1d06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a\"" Jan 20 06:39:30.698138 containerd[1591]: time="2026-01-20T06:39:30.698111409Z" level=info msg="StartContainer for \"0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a\"" Jan 20 06:39:30.699562 containerd[1591]: time="2026-01-20T06:39:30.699521374Z" level=info msg="connecting to shim 0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a" address="unix:///run/containerd/s/ec7c96c6d18b4ed49b58490317bd8cc8c0634c538bf2c3ac3b95ee52d9c70abe" protocol=ttrpc version=3 Jan 20 06:39:30.701402 containerd[1591]: time="2026-01-20T06:39:30.700932971Z" level=info msg="CreateContainer within sandbox \"e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 06:39:30.716235 containerd[1591]: time="2026-01-20T06:39:30.716048097Z" level=info msg="Container 709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:30.732567 containerd[1591]: time="2026-01-20T06:39:30.732522559Z" level=info msg="CreateContainer within sandbox \"e727285628d401e0ce1ec786612e9d79a9f3110df0f86490bc728113184e0609\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d\"" Jan 20 06:39:30.733675 containerd[1591]: time="2026-01-20T06:39:30.733635826Z" level=info msg="StartContainer for \"709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d\"" Jan 20 06:39:30.735088 systemd[1]: Started cri-containerd-0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a.scope - libcontainer container 0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a. Jan 20 06:39:30.738683 containerd[1591]: time="2026-01-20T06:39:30.737792152Z" level=info msg="connecting to shim 709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d" address="unix:///run/containerd/s/60f9779851ae1dad69cd937c42132f22a93a457d42c31ed0371a2125bc9cc60b" protocol=ttrpc version=3 Jan 20 06:39:30.745159 systemd[1]: Started cri-containerd-e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13.scope - libcontainer container e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13. Jan 20 06:39:30.774162 systemd[1]: Started cri-containerd-709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d.scope - libcontainer container 709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d. 
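RunPodSandbox, CreateContainer and StartContainer are CRI RPCs served by containerd's CRI plugin; the kubelet drives them over the runtime endpoint. A sketch of querying that CRI endpoint directly with the k8s.io/cri-api v1 types, assuming the conventional containerd endpoint unix:///run/containerd/containerd.sock (the log only shows the per-shim sockets, so the endpoint here is an assumption):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI endpoint for containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range resp.Items {
		// Sandbox IDs here correspond to the cri-containerd-<id>.scope units above.
		fmt.Printf("%.13s  %s/%s  %s\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}
```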
Jan 20 06:39:30.875789 kubelet[2364]: W0120 06:39:30.875590 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.87.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:30.876036 kubelet[2364]: E0120 06:39:30.876005 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.87.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:30.888186 containerd[1591]: time="2026-01-20T06:39:30.888037067Z" level=info msg="StartContainer for \"0ea963455e9e01890cdafbfe018bbdd630acd33f3d53720bd5d31da5a228309a\" returns successfully" Jan 20 06:39:30.889322 containerd[1591]: time="2026-01-20T06:39:30.888940064Z" level=info msg="StartContainer for \"e62ba1bd0c785903ec9662814660f13fda81f586c4cd674cc6d45cbcbbee7b13\" returns successfully" Jan 20 06:39:30.911946 containerd[1591]: time="2026-01-20T06:39:30.911882187Z" level=info msg="StartContainer for \"709a10216cb1e846bfae465bbd33329c438703dee6f810cf7ed47fc7b78f354d\" returns successfully" Jan 20 06:39:30.917200 kubelet[2364]: W0120 06:39:30.917096 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.87.233:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:30.917632 kubelet[2364]: E0120 06:39:30.917362 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.87.233:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:30.983624 kubelet[2364]: W0120 06:39:30.983516 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.87.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.233:6443: connect: connection refused Jan 20 06:39:30.984238 kubelet[2364]: E0120 06:39:30.983861 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.87.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.87.233:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:39:31.169083 kubelet[2364]: E0120 06:39:31.168954 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-f46ee37080?timeout=10s\": dial tcp 164.92.87.233:6443: connect: connection refused" interval="1.6s" Jan 20 06:39:31.372761 kubelet[2364]: I0120 06:39:31.372705 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:31.870714 kubelet[2364]: E0120 06:39:31.870105 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:31.870714 kubelet[2364]: E0120 
06:39:31.870252 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:31.873853 kubelet[2364]: E0120 06:39:31.873764 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:31.874296 kubelet[2364]: E0120 06:39:31.874192 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:31.876334 kubelet[2364]: E0120 06:39:31.876302 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:31.876476 kubelet[2364]: E0120 06:39:31.876441 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:32.879838 kubelet[2364]: E0120 06:39:32.878457 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:32.879838 kubelet[2364]: E0120 06:39:32.878589 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:32.879838 kubelet[2364]: E0120 06:39:32.878945 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:32.879838 kubelet[2364]: E0120 06:39:32.879077 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:32.880701 kubelet[2364]: E0120 06:39:32.880681 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:32.881051 kubelet[2364]: E0120 06:39:32.881036 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:33.141320 kubelet[2364]: E0120 06:39:33.141095 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4585.0.0-n-f46ee37080\" not found" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.334992 kubelet[2364]: I0120 06:39:33.334894 2364 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.356838 kubelet[2364]: I0120 06:39:33.356240 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.370836 kubelet[2364]: E0120 06:39:33.370264 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 
06:39:33.370836 kubelet[2364]: I0120 06:39:33.370300 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.376850 kubelet[2364]: E0120 06:39:33.376010 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.377153 kubelet[2364]: I0120 06:39:33.376951 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.381217 kubelet[2364]: E0120 06:39:33.381177 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.728204 kubelet[2364]: I0120 06:39:33.728067 2364 apiserver.go:52] "Watching apiserver" Jan 20 06:39:33.756608 kubelet[2364]: I0120 06:39:33.756554 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:39:33.879271 kubelet[2364]: I0120 06:39:33.879227 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.879495 kubelet[2364]: I0120 06:39:33.879477 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.879967 kubelet[2364]: I0120 06:39:33.879938 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.882829 kubelet[2364]: E0120 06:39:33.882774 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.882963 kubelet[2364]: E0120 06:39:33.882954 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:33.883519 kubelet[2364]: E0120 06:39:33.882774 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.883519 kubelet[2364]: E0120 06:39:33.883076 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:33.883986 kubelet[2364]: E0120 06:39:33.883946 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585.0.0-n-f46ee37080\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:33.884099 kubelet[2364]: E0120 06:39:33.884082 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:35.263362 systemd[1]: Reload requested from client PID 2632 ('systemctl') (unit session-6.scope)... 
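All three mirror pods are rejected with "no PriorityClass with name system-node-critical was found": the kubelet can now reach the API server, but that built-in priority class does not exist yet, so the forbidden errors repeat until it does. A client-go sketch for checking the object, with a hypothetical admin kubeconfig path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute whatever admin credentials you have.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The mirror-pod errors above stop once this object exists.
	pc, err := clientset.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("system-node-critical not found yet:", err)
		return
	}
	fmt.Printf("system-node-critical exists, value=%d\n", pc.Value)
}
```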
Jan 20 06:39:35.263382 systemd[1]: Reloading... Jan 20 06:39:35.393846 zram_generator::config[2681]: No configuration found. Jan 20 06:39:35.608996 kubelet[2364]: I0120 06:39:35.608663 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:35.618588 kubelet[2364]: W0120 06:39:35.618134 2364 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:35.618974 kubelet[2364]: E0120 06:39:35.618857 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:35.819452 systemd[1]: Reloading finished in 555 ms. Jan 20 06:39:35.849341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:35.871723 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:39:35.872709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:35.872991 systemd[1]: kubelet.service: Consumed 962ms CPU time, 127M memory peak. Jan 20 06:39:35.876494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:39:36.087961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:39:36.101320 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:39:36.179144 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:39:36.180033 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:39:36.180033 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:39:36.181828 kubelet[2729]: I0120 06:39:36.180917 2729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:39:36.189875 kubelet[2729]: I0120 06:39:36.189789 2729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:39:36.189875 kubelet[2729]: I0120 06:39:36.189841 2729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:39:36.190294 kubelet[2729]: I0120 06:39:36.190272 2729 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:39:36.192615 kubelet[2729]: I0120 06:39:36.192568 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
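The restarted kubelet (PID 2729) reports "Client rotation is on" and loads /var/lib/kubelet/pki/kubelet-client-current.pem, the bootstrapped client certificate that rotation keeps replacing. A stdlib-only sketch that prints the validity window of that bundle (path copied from the log line above):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from "Loading cert/key pair from ..." in the log.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue // the same file also carries the private key
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```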
Jan 20 06:39:36.200600 kubelet[2729]: I0120 06:39:36.200544 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:39:36.210957 kubelet[2729]: I0120 06:39:36.210247 2729 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:39:36.221916 kubelet[2729]: I0120 06:39:36.221089 2729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 06:39:36.221916 kubelet[2729]: I0120 06:39:36.221379 2729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:39:36.221916 kubelet[2729]: I0120 06:39:36.221421 2729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585.0.0-n-f46ee37080","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:39:36.221916 kubelet[2729]: I0120 06:39:36.221684 2729 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:39:36.222260 kubelet[2729]: I0120 06:39:36.221695 2729 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:39:36.222260 kubelet[2729]: I0120 06:39:36.221755 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:39:36.222496 kubelet[2729]: I0120 06:39:36.222473 2729 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:39:36.222603 kubelet[2729]: I0120 06:39:36.222593 2729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:39:36.222674 kubelet[2729]: I0120 06:39:36.222667 2729 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:39:36.222739 kubelet[2729]: I0120 06:39:36.222730 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:39:36.227971 kubelet[2729]: I0120 06:39:36.227939 2729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:39:36.228840 kubelet[2729]: I0120 06:39:36.228538 2729 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:39:36.230332 kubelet[2729]: I0120 06:39:36.230310 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:39:36.232823 kubelet[2729]: I0120 06:39:36.230449 2729 server.go:1287] "Started kubelet" Jan 20 06:39:36.234335 kubelet[2729]: I0120 06:39:36.234312 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:39:36.243453 kubelet[2729]: I0120 06:39:36.243397 2729 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:39:36.246037 kubelet[2729]: I0120 06:39:36.246009 2729 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:39:36.248701 kubelet[2729]: I0120 06:39:36.248627 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:39:36.249061 kubelet[2729]: I0120 06:39:36.249043 2729 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:39:36.249451 kubelet[2729]: I0120 06:39:36.249424 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:39:36.255699 kubelet[2729]: I0120 06:39:36.255573 2729 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:39:36.259586 kubelet[2729]: E0120 06:39:36.259143 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585.0.0-n-f46ee37080\" not found" Jan 20 06:39:36.264556 kubelet[2729]: I0120 06:39:36.264531 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:39:36.267851 kubelet[2729]: I0120 06:39:36.267480 2729 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:39:36.271168 kubelet[2729]: I0120 06:39:36.271119 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:39:36.274973 kubelet[2729]: I0120 06:39:36.274938 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:39:36.275476 kubelet[2729]: I0120 06:39:36.275111 2729 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:39:36.275476 kubelet[2729]: I0120 06:39:36.275137 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 06:39:36.275476 kubelet[2729]: I0120 06:39:36.275144 2729 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:39:36.275476 kubelet[2729]: E0120 06:39:36.275199 2729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:39:36.283536 kubelet[2729]: I0120 06:39:36.283494 2729 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:39:36.285100 kubelet[2729]: I0120 06:39:36.284990 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:39:36.290702 kubelet[2729]: E0120 06:39:36.290098 2729 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:39:36.293823 kubelet[2729]: I0120 06:39:36.292033 2729 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:39:36.348480 kubelet[2729]: I0120 06:39:36.348441 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:39:36.348716 kubelet[2729]: I0120 06:39:36.348696 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:39:36.348856 kubelet[2729]: I0120 06:39:36.348844 2729 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:39:36.349217 kubelet[2729]: I0120 06:39:36.349189 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 06:39:36.349401 kubelet[2729]: I0120 06:39:36.349357 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 06:39:36.349485 kubelet[2729]: I0120 06:39:36.349474 2729 policy_none.go:49] "None policy: Start" Jan 20 06:39:36.349556 kubelet[2729]: I0120 06:39:36.349545 2729 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:39:36.349642 kubelet[2729]: I0120 06:39:36.349631 2729 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:39:36.349988 kubelet[2729]: I0120 06:39:36.349968 2729 state_mem.go:75] "Updated machine memory state" Jan 20 06:39:36.356520 kubelet[2729]: I0120 06:39:36.356486 2729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:39:36.356735 kubelet[2729]: I0120 06:39:36.356718 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:39:36.356827 kubelet[2729]: I0120 06:39:36.356737 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:39:36.357173 kubelet[2729]: I0120 06:39:36.357145 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:39:36.360294 kubelet[2729]: E0120 06:39:36.360271 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 06:39:36.377857 kubelet[2729]: I0120 06:39:36.377789 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.378283 kubelet[2729]: I0120 06:39:36.378261 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.378539 kubelet[2729]: I0120 06:39:36.378056 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.387867 kubelet[2729]: W0120 06:39:36.387837 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:36.389355 kubelet[2729]: W0120 06:39:36.389292 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:36.391278 kubelet[2729]: W0120 06:39:36.391260 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:36.391444 kubelet[2729]: E0120 06:39:36.391430 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" already exists" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.458556 kubelet[2729]: I0120 06:39:36.458440 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469074 kubelet[2729]: I0120 06:39:36.468668 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-kubeconfig\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469074 kubelet[2729]: I0120 06:39:36.468751 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469074 kubelet[2729]: I0120 06:39:36.468889 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-ca-certs\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469074 kubelet[2729]: I0120 06:39:36.468931 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469074 kubelet[2729]: I0120 06:39:36.468990 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-ca-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469361 kubelet[2729]: I0120 06:39:36.469040 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-k8s-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469361 kubelet[2729]: I0120 06:39:36.469065 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833f2805c0b1b5a37a767887155e81d7-k8s-certs\") pod \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" (UID: \"833f2805c0b1b5a37a767887155e81d7\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469361 kubelet[2729]: I0120 06:39:36.469129 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5131459b9e32e755408d95f91c57fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4585.0.0-n-f46ee37080\" (UID: \"c5131459b9e32e755408d95f91c57fa6\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.469361 kubelet[2729]: I0120 06:39:36.469155 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e1d81878d6ebacadfdcf6a750730b1f-kubeconfig\") pod \"kube-scheduler-ci-4585.0.0-n-f46ee37080\" (UID: \"2e1d81878d6ebacadfdcf6a750730b1f\") " pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.473656 kubelet[2729]: I0120 06:39:36.473596 2729 kubelet_node_status.go:124] "Node was previously registered" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.474401 kubelet[2729]: I0120 06:39:36.473847 2729 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585.0.0-n-f46ee37080" Jan 20 06:39:36.689460 kubelet[2729]: E0120 06:39:36.688552 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:36.691914 kubelet[2729]: E0120 06:39:36.691521 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:36.691914 kubelet[2729]: E0120 06:39:36.691673 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:37.224211 kubelet[2729]: I0120 06:39:37.223921 2729 apiserver.go:52] "Watching apiserver" Jan 20 06:39:37.265010 kubelet[2729]: I0120 06:39:37.264932 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:39:37.325984 kubelet[2729]: I0120 06:39:37.324999 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:37.325984 kubelet[2729]: E0120 06:39:37.325212 2729 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:37.325128 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 20 06:39:37.326966 kubelet[2729]: I0120 06:39:37.326559 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:37.338944 kubelet[2729]: W0120 06:39:37.338911 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:37.339309 kubelet[2729]: W0120 06:39:37.339208 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:39:37.339309 kubelet[2729]: E0120 06:39:37.339278 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585.0.0-n-f46ee37080\" already exists" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:37.340418 kubelet[2729]: E0120 06:39:37.340341 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:37.340729 kubelet[2729]: E0120 06:39:37.340710 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-f46ee37080\" already exists" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" Jan 20 06:39:37.342098 kubelet[2729]: E0120 06:39:37.342073 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:37.395827 sshd[1781]: Connection closed by 20.161.92.111 port 37122 Jan 20 06:39:37.398086 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:37.404663 systemd[1]: sshd@4-164.92.87.233:22-20.161.92.111:37122.service: Deactivated successfully. Jan 20 06:39:37.410292 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 06:39:37.410814 kubelet[2729]: I0120 06:39:37.410737 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-f46ee37080" podStartSLOduration=1.410716211 podStartE2EDuration="1.410716211s" podCreationTimestamp="2026-01-20 06:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:37.378351824 +0000 UTC m=+1.269664417" watchObservedRunningTime="2026-01-20 06:39:37.410716211 +0000 UTC m=+1.302028798" Jan 20 06:39:37.412176 systemd[1]: session-6.scope: Consumed 4.811s CPU time, 159.4M memory peak. Jan 20 06:39:37.414769 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. Jan 20 06:39:37.420986 systemd-logind[1565]: Removed session 6. 
Jan 20 06:39:37.444002 kubelet[2729]: I0120 06:39:37.443840 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4585.0.0-n-f46ee37080" podStartSLOduration=1.443792491 podStartE2EDuration="1.443792491s" podCreationTimestamp="2026-01-20 06:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:37.412306509 +0000 UTC m=+1.303619109" watchObservedRunningTime="2026-01-20 06:39:37.443792491 +0000 UTC m=+1.335105090" Jan 20 06:39:37.444190 kubelet[2729]: I0120 06:39:37.444014 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4585.0.0-n-f46ee37080" podStartSLOduration=2.44400433 podStartE2EDuration="2.44400433s" podCreationTimestamp="2026-01-20 06:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:37.443970464 +0000 UTC m=+1.335283057" watchObservedRunningTime="2026-01-20 06:39:37.44400433 +0000 UTC m=+1.335316924" Jan 20 06:39:38.327387 kubelet[2729]: E0120 06:39:38.326872 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:38.327387 kubelet[2729]: E0120 06:39:38.327317 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:38.434906 kubelet[2729]: E0120 06:39:38.434264 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:39.329845 kubelet[2729]: E0120 06:39:39.329789 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:40.564789 kubelet[2729]: E0120 06:39:40.564276 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:40.592539 kubelet[2729]: E0120 06:39:40.592498 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:41.333441 kubelet[2729]: E0120 06:39:41.333302 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:41.334227 kubelet[2729]: E0120 06:39:41.333806 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:41.343642 systemd-resolved[1267]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 20 06:39:42.326886 systemd-timesyncd[1464]: Contacted time server 73.185.182.209:123 (2.flatcar.pool.ntp.org). Jan 20 06:39:42.326928 systemd-resolved[1267]: Clock change detected. Flushing caches. 
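The recurring dns.go:153 warnings mean the host's resolver list exceeds what the kubelet will pass to pods: the applied line is truncated to three entries and even repeats 67.207.67.2. A stdlib sketch that reproduces the check against /etc/resolv.conf, using a three-entry cap to match the applied line shown in the log:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // the applied line in the log keeps exactly three entries

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	seen := map[string]bool{}
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if seen[fields[1]] {
				fmt.Println("duplicate nameserver:", fields[1])
			}
			seen[fields[1]] = true
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("nameservers:", servers)
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: %d > %d, the kubelet will truncate the list for pods\n",
			len(servers), maxNameservers)
	}
}
```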
Jan 20 06:39:42.326963 systemd-timesyncd[1464]: Initial clock synchronization to Tue 2026-01-20 06:39:42.326631 UTC. Jan 20 06:39:42.741474 kubelet[2729]: I0120 06:39:42.741346 2729 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 06:39:42.742195 containerd[1591]: time="2026-01-20T06:39:42.742152229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 06:39:42.743639 kubelet[2729]: I0120 06:39:42.742530 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 06:39:43.022093 kubelet[2729]: E0120 06:39:43.021863 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:43.720992 systemd[1]: Created slice kubepods-besteffort-podc6ab5b5c_f113_49c2_a111_c23dee3f1816.slice - libcontainer container kubepods-besteffort-podc6ab5b5c_f113_49c2_a111_c23dee3f1816.slice. Jan 20 06:39:43.742790 systemd[1]: Created slice kubepods-burstable-pod6c6a6142_9891_483a_a948_aec1a746c626.slice - libcontainer container kubepods-burstable-pod6c6a6142_9891_483a_a948_aec1a746c626.slice. Jan 20 06:39:43.798421 kubelet[2729]: I0120 06:39:43.798362 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6ab5b5c-f113-49c2-a111-c23dee3f1816-xtables-lock\") pod \"kube-proxy-mrsns\" (UID: \"c6ab5b5c-f113-49c2-a111-c23dee3f1816\") " pod="kube-system/kube-proxy-mrsns" Jan 20 06:39:43.799953 kubelet[2729]: I0120 06:39:43.799020 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c6a6142-9891-483a-a948-aec1a746c626-xtables-lock\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.799953 kubelet[2729]: I0120 06:39:43.799077 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6ab5b5c-f113-49c2-a111-c23dee3f1816-lib-modules\") pod \"kube-proxy-mrsns\" (UID: \"c6ab5b5c-f113-49c2-a111-c23dee3f1816\") " pod="kube-system/kube-proxy-mrsns" Jan 20 06:39:43.799953 kubelet[2729]: I0120 06:39:43.799801 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6c6a6142-9891-483a-a948-aec1a746c626-cni-plugin\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.799953 kubelet[2729]: I0120 06:39:43.799833 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6c6a6142-9891-483a-a948-aec1a746c626-flannel-cfg\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.799953 kubelet[2729]: I0120 06:39:43.799860 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkg59\" (UniqueName: \"kubernetes.io/projected/c6ab5b5c-f113-49c2-a111-c23dee3f1816-kube-api-access-lkg59\") pod \"kube-proxy-mrsns\" (UID: 
\"c6ab5b5c-f113-49c2-a111-c23dee3f1816\") " pod="kube-system/kube-proxy-mrsns" Jan 20 06:39:43.800223 kubelet[2729]: I0120 06:39:43.799876 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c6a6142-9891-483a-a948-aec1a746c626-run\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.800223 kubelet[2729]: I0120 06:39:43.799922 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6c6a6142-9891-483a-a948-aec1a746c626-cni\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.800223 kubelet[2729]: I0120 06:39:43.799969 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v84k\" (UniqueName: \"kubernetes.io/projected/6c6a6142-9891-483a-a948-aec1a746c626-kube-api-access-7v84k\") pod \"kube-flannel-ds-swzmf\" (UID: \"6c6a6142-9891-483a-a948-aec1a746c626\") " pod="kube-flannel/kube-flannel-ds-swzmf" Jan 20 06:39:43.800223 kubelet[2729]: I0120 06:39:43.800002 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6ab5b5c-f113-49c2-a111-c23dee3f1816-kube-proxy\") pod \"kube-proxy-mrsns\" (UID: \"c6ab5b5c-f113-49c2-a111-c23dee3f1816\") " pod="kube-system/kube-proxy-mrsns" Jan 20 06:39:44.034218 kubelet[2729]: E0120 06:39:44.034165 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:44.035642 containerd[1591]: time="2026-01-20T06:39:44.035557746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrsns,Uid:c6ab5b5c-f113-49c2-a111-c23dee3f1816,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:44.049602 kubelet[2729]: E0120 06:39:44.049543 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:44.053100 containerd[1591]: time="2026-01-20T06:39:44.052753820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-swzmf,Uid:6c6a6142-9891-483a-a948-aec1a746c626,Namespace:kube-flannel,Attempt:0,}" Jan 20 06:39:44.062157 containerd[1591]: time="2026-01-20T06:39:44.062100759Z" level=info msg="connecting to shim f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda" address="unix:///run/containerd/s/744503bba57cf660379248c5eef1be8b3c0a353a262b1d615d7774c9bd898919" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:44.098307 containerd[1591]: time="2026-01-20T06:39:44.097519219Z" level=info msg="connecting to shim 1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac" address="unix:///run/containerd/s/a4cc8a55b056eda753f5f0748bbcd9e94b6cda6e944e7eba4429ff0b2c7af0c1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:44.110388 systemd[1]: Started cri-containerd-f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda.scope - libcontainer container f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda. 
Jan 20 06:39:44.138036 systemd[1]: Started cri-containerd-1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac.scope - libcontainer container 1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac. Jan 20 06:39:44.177745 containerd[1591]: time="2026-01-20T06:39:44.177562790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrsns,Uid:c6ab5b5c-f113-49c2-a111-c23dee3f1816,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda\"" Jan 20 06:39:44.183731 kubelet[2729]: E0120 06:39:44.183416 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:44.193242 containerd[1591]: time="2026-01-20T06:39:44.193196487Z" level=info msg="CreateContainer within sandbox \"f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 06:39:44.208426 containerd[1591]: time="2026-01-20T06:39:44.208307350Z" level=info msg="Container 1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:44.218972 containerd[1591]: time="2026-01-20T06:39:44.218914919Z" level=info msg="CreateContainer within sandbox \"f1219d075ef2d5800fccaeea09f82751135a49ee413ca4b30b0a4cb55dd72cda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530\"" Jan 20 06:39:44.222104 containerd[1591]: time="2026-01-20T06:39:44.222057390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-swzmf,Uid:6c6a6142-9891-483a-a948-aec1a746c626,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\"" Jan 20 06:39:44.222938 containerd[1591]: time="2026-01-20T06:39:44.222892093Z" level=info msg="StartContainer for \"1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530\"" Jan 20 06:39:44.225151 kubelet[2729]: E0120 06:39:44.224964 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:44.226378 containerd[1591]: time="2026-01-20T06:39:44.226334625Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 20 06:39:44.230998 containerd[1591]: time="2026-01-20T06:39:44.230953857Z" level=info msg="connecting to shim 1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530" address="unix:///run/containerd/s/744503bba57cf660379248c5eef1be8b3c0a353a262b1d615d7774c9bd898919" protocol=ttrpc version=3 Jan 20 06:39:44.260170 systemd[1]: Started cri-containerd-1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530.scope - libcontainer container 1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530. 
Jan 20 06:39:44.335390 containerd[1591]: time="2026-01-20T06:39:44.334543335Z" level=info msg="StartContainer for \"1654a12fdd0e34ea80503a5d80289cb00a1010d1bfbbc089c87624186e45c530\" returns successfully" Jan 20 06:39:45.034401 kubelet[2729]: E0120 06:39:45.032374 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:46.007619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704898814.mount: Deactivated successfully. Jan 20 06:39:46.054822 containerd[1591]: time="2026-01-20T06:39:46.054645666Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:46.056425 containerd[1591]: time="2026-01-20T06:39:46.056353919Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0" Jan 20 06:39:46.056956 containerd[1591]: time="2026-01-20T06:39:46.056902776Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:46.060253 containerd[1591]: time="2026-01-20T06:39:46.060165698Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:46.061918 containerd[1591]: time="2026-01-20T06:39:46.061855254Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.835485804s" Jan 20 06:39:46.061918 containerd[1591]: time="2026-01-20T06:39:46.061906345Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 20 06:39:46.066566 containerd[1591]: time="2026-01-20T06:39:46.066481644Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 06:39:46.079727 containerd[1591]: time="2026-01-20T06:39:46.079530968Z" level=info msg="Container 999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:46.085283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837884656.mount: Deactivated successfully. 
Jan 20 06:39:46.089642 containerd[1591]: time="2026-01-20T06:39:46.089556117Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945\"" Jan 20 06:39:46.092060 containerd[1591]: time="2026-01-20T06:39:46.091983907Z" level=info msg="StartContainer for \"999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945\"" Jan 20 06:39:46.093611 containerd[1591]: time="2026-01-20T06:39:46.093572372Z" level=info msg="connecting to shim 999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945" address="unix:///run/containerd/s/a4cc8a55b056eda753f5f0748bbcd9e94b6cda6e944e7eba4429ff0b2c7af0c1" protocol=ttrpc version=3 Jan 20 06:39:46.143073 systemd[1]: Started cri-containerd-999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945.scope - libcontainer container 999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945. Jan 20 06:39:46.193035 systemd[1]: cri-containerd-999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945.scope: Deactivated successfully. Jan 20 06:39:46.196050 containerd[1591]: time="2026-01-20T06:39:46.195948685Z" level=info msg="StartContainer for \"999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945\" returns successfully" Jan 20 06:39:46.198972 containerd[1591]: time="2026-01-20T06:39:46.198912224Z" level=info msg="received container exit event container_id:\"999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945\" id:\"999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945\" pid:3065 exited_at:{seconds:1768891186 nanos:197923463}" Jan 20 06:39:46.241580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999d7384edae83dea5ea0280f0e8004cc8840c4ae1c0d3f5764c5932f542c945-rootfs.mount: Deactivated successfully. Jan 20 06:39:46.979756 kubelet[2729]: I0120 06:39:46.979198 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mrsns" podStartSLOduration=3.979177796 podStartE2EDuration="3.979177796s" podCreationTimestamp="2026-01-20 06:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:45.049476953 +0000 UTC m=+8.254894819" watchObservedRunningTime="2026-01-20 06:39:46.979177796 +0000 UTC m=+10.184595661" Jan 20 06:39:47.040641 kubelet[2729]: E0120 06:39:47.040567 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:47.042572 containerd[1591]: time="2026-01-20T06:39:47.042513978Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 20 06:39:49.019143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62247836.mount: Deactivated successfully. Jan 20 06:39:49.126202 kubelet[2729]: E0120 06:39:49.126169 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:50.392689 update_engine[1566]: I20260120 06:39:50.391786 1566 update_attempter.cc:509] Updating boot flags... 
Jan 20 06:39:51.425728 containerd[1591]: time="2026-01-20T06:39:51.425462253Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:51.426642 containerd[1591]: time="2026-01-20T06:39:51.426602712Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=22510520" Jan 20 06:39:51.427196 containerd[1591]: time="2026-01-20T06:39:51.427160930Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:51.430900 containerd[1591]: time="2026-01-20T06:39:51.430769031Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:51.431554 containerd[1591]: time="2026-01-20T06:39:51.431514049Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.388945449s" Jan 20 06:39:51.431554 containerd[1591]: time="2026-01-20T06:39:51.431551268Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 20 06:39:51.436987 containerd[1591]: time="2026-01-20T06:39:51.436932621Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 06:39:51.452979 containerd[1591]: time="2026-01-20T06:39:51.452912891Z" level=info msg="Container 0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:51.460085 containerd[1591]: time="2026-01-20T06:39:51.459842266Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3\"" Jan 20 06:39:51.460935 containerd[1591]: time="2026-01-20T06:39:51.460885846Z" level=info msg="StartContainer for \"0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3\"" Jan 20 06:39:51.462589 containerd[1591]: time="2026-01-20T06:39:51.462495710Z" level=info msg="connecting to shim 0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3" address="unix:///run/containerd/s/a4cc8a55b056eda753f5f0748bbcd9e94b6cda6e944e7eba4429ff0b2c7af0c1" protocol=ttrpc version=3 Jan 20 06:39:51.504138 systemd[1]: Started cri-containerd-0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3.scope - libcontainer container 0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3. Jan 20 06:39:51.548150 systemd[1]: cri-containerd-0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3.scope: Deactivated successfully. 
Jan 20 06:39:51.551561 containerd[1591]: time="2026-01-20T06:39:51.550821894Z" level=info msg="received container exit event container_id:\"0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3\" id:\"0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3\" pid:3154 exited_at:{seconds:1768891191 nanos:548377673}" Jan 20 06:39:51.552342 containerd[1591]: time="2026-01-20T06:39:51.552253907Z" level=info msg="StartContainer for \"0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3\" returns successfully" Jan 20 06:39:51.580249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a882144977b6bb7432a5216c303b8584b7a0f952f0d2cde988f259d0323ddb3-rootfs.mount: Deactivated successfully. Jan 20 06:39:51.610533 kubelet[2729]: I0120 06:39:51.610495 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 06:39:51.662608 systemd[1]: Created slice kubepods-burstable-poded34b30a_8831_4947_9d9f_0e979230da6d.slice - libcontainer container kubepods-burstable-poded34b30a_8831_4947_9d9f_0e979230da6d.slice. Jan 20 06:39:51.677038 systemd[1]: Created slice kubepods-burstable-podfd9d80b2_4b14_4675_a245_ce117a79af16.slice - libcontainer container kubepods-burstable-podfd9d80b2_4b14_4675_a245_ce117a79af16.slice. Jan 20 06:39:51.756549 kubelet[2729]: I0120 06:39:51.756325 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed34b30a-8831-4947-9d9f-0e979230da6d-config-volume\") pod \"coredns-668d6bf9bc-2fzk2\" (UID: \"ed34b30a-8831-4947-9d9f-0e979230da6d\") " pod="kube-system/coredns-668d6bf9bc-2fzk2" Jan 20 06:39:51.756549 kubelet[2729]: I0120 06:39:51.756471 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mmhs\" (UniqueName: \"kubernetes.io/projected/ed34b30a-8831-4947-9d9f-0e979230da6d-kube-api-access-2mmhs\") pod \"coredns-668d6bf9bc-2fzk2\" (UID: \"ed34b30a-8831-4947-9d9f-0e979230da6d\") " pod="kube-system/coredns-668d6bf9bc-2fzk2" Jan 20 06:39:51.756850 kubelet[2729]: I0120 06:39:51.756567 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnfc\" (UniqueName: \"kubernetes.io/projected/fd9d80b2-4b14-4675-a245-ce117a79af16-kube-api-access-2wnfc\") pod \"coredns-668d6bf9bc-7dz2f\" (UID: \"fd9d80b2-4b14-4675-a245-ce117a79af16\") " pod="kube-system/coredns-668d6bf9bc-7dz2f" Jan 20 06:39:51.756850 kubelet[2729]: I0120 06:39:51.756632 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd9d80b2-4b14-4675-a245-ce117a79af16-config-volume\") pod \"coredns-668d6bf9bc-7dz2f\" (UID: \"fd9d80b2-4b14-4675-a245-ce117a79af16\") " pod="kube-system/coredns-668d6bf9bc-7dz2f" Jan 20 06:39:51.970747 kubelet[2729]: E0120 06:39:51.969337 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:51.970938 containerd[1591]: time="2026-01-20T06:39:51.970104593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fzk2,Uid:ed34b30a-8831-4947-9d9f-0e979230da6d,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:51.983175 kubelet[2729]: E0120 06:39:51.982580 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:51.984200 containerd[1591]: time="2026-01-20T06:39:51.984155997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dz2f,Uid:fd9d80b2-4b14-4675-a245-ce117a79af16,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:52.013538 containerd[1591]: time="2026-01-20T06:39:52.013406211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fzk2,Uid:ed34b30a-8831-4947-9d9f-0e979230da6d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3569426acb507fccf608d1c68f02eb7ac6ba8123244c7463c7aea0c4fb9e887e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:39:52.014233 kubelet[2729]: E0120 06:39:52.013751 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3569426acb507fccf608d1c68f02eb7ac6ba8123244c7463c7aea0c4fb9e887e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:39:52.014233 kubelet[2729]: E0120 06:39:52.013839 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3569426acb507fccf608d1c68f02eb7ac6ba8123244c7463c7aea0c4fb9e887e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-2fzk2" Jan 20 06:39:52.014233 kubelet[2729]: E0120 06:39:52.013864 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3569426acb507fccf608d1c68f02eb7ac6ba8123244c7463c7aea0c4fb9e887e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-2fzk2" Jan 20 06:39:52.014233 kubelet[2729]: E0120 06:39:52.013915 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2fzk2_kube-system(ed34b30a-8831-4947-9d9f-0e979230da6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2fzk2_kube-system(ed34b30a-8831-4947-9d9f-0e979230da6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3569426acb507fccf608d1c68f02eb7ac6ba8123244c7463c7aea0c4fb9e887e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-2fzk2" podUID="ed34b30a-8831-4947-9d9f-0e979230da6d" Jan 20 06:39:52.015848 containerd[1591]: time="2026-01-20T06:39:52.015778187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dz2f,Uid:fd9d80b2-4b14-4675-a245-ce117a79af16,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e16a2a03fe3261977d6e8e5c963a649f26bdca754d9569a1149b3d6201828c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:39:52.016436 kubelet[2729]: E0120 06:39:52.016314 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"86e16a2a03fe3261977d6e8e5c963a649f26bdca754d9569a1149b3d6201828c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 06:39:52.016436 kubelet[2729]: E0120 06:39:52.016377 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e16a2a03fe3261977d6e8e5c963a649f26bdca754d9569a1149b3d6201828c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7dz2f" Jan 20 06:39:52.016436 kubelet[2729]: E0120 06:39:52.016399 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86e16a2a03fe3261977d6e8e5c963a649f26bdca754d9569a1149b3d6201828c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7dz2f" Jan 20 06:39:52.016662 kubelet[2729]: E0120 06:39:52.016626 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7dz2f_kube-system(fd9d80b2-4b14-4675-a245-ce117a79af16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7dz2f_kube-system(fd9d80b2-4b14-4675-a245-ce117a79af16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86e16a2a03fe3261977d6e8e5c963a649f26bdca754d9569a1149b3d6201828c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-7dz2f" podUID="fd9d80b2-4b14-4675-a245-ce117a79af16" Jan 20 06:39:52.056211 kubelet[2729]: E0120 06:39:52.056148 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:52.060358 containerd[1591]: time="2026-01-20T06:39:52.060072129Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 06:39:52.071098 containerd[1591]: time="2026-01-20T06:39:52.070869061Z" level=info msg="Container d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:52.082294 containerd[1591]: time="2026-01-20T06:39:52.082252199Z" level=info msg="CreateContainer within sandbox \"1f2af6d7b2b6c213753e87968076b0f465ada490379b1fe68caa9f5014188cac\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa\"" Jan 20 06:39:52.084368 containerd[1591]: time="2026-01-20T06:39:52.084250766Z" level=info msg="StartContainer for \"d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa\"" Jan 20 06:39:52.087158 containerd[1591]: time="2026-01-20T06:39:52.086286038Z" level=info msg="connecting to shim d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa" address="unix:///run/containerd/s/a4cc8a55b056eda753f5f0748bbcd9e94b6cda6e944e7eba4429ff0b2c7af0c1" protocol=ttrpc version=3 Jan 20 06:39:52.116136 systemd[1]: Started cri-containerd-d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa.scope - libcontainer 
container d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa. Jan 20 06:39:52.164323 containerd[1591]: time="2026-01-20T06:39:52.164271142Z" level=info msg="StartContainer for \"d65b6ec65126d7c1c84ebd8ea3ed259f68ccde04c7a739ff7bcd8f084a7b39fa\" returns successfully" Jan 20 06:39:53.060758 kubelet[2729]: E0120 06:39:53.058909 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:53.082140 kubelet[2729]: I0120 06:39:53.081726 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-swzmf" podStartSLOduration=2.874024163 podStartE2EDuration="10.08168873s" podCreationTimestamp="2026-01-20 06:39:43 +0000 UTC" firstStartedPulling="2026-01-20 06:39:44.225877642 +0000 UTC m=+7.431295487" lastFinishedPulling="2026-01-20 06:39:51.43354219 +0000 UTC m=+14.638960054" observedRunningTime="2026-01-20 06:39:53.081187094 +0000 UTC m=+16.286604955" watchObservedRunningTime="2026-01-20 06:39:53.08168873 +0000 UTC m=+16.287106597" Jan 20 06:39:53.237338 systemd-networkd[1494]: flannel.1: Link UP Jan 20 06:39:53.237349 systemd-networkd[1494]: flannel.1: Gained carrier Jan 20 06:39:54.063854 kubelet[2729]: E0120 06:39:54.063803 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:39:55.011107 systemd-networkd[1494]: flannel.1: Gained IPv6LL Jan 20 06:40:02.964889 kubelet[2729]: E0120 06:40:02.962768 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:02.965728 containerd[1591]: time="2026-01-20T06:40:02.963835623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dz2f,Uid:fd9d80b2-4b14-4675-a245-ce117a79af16,Namespace:kube-system,Attempt:0,}" Jan 20 06:40:03.005725 systemd-networkd[1494]: cni0: Link UP Jan 20 06:40:03.005738 systemd-networkd[1494]: cni0: Gained carrier Jan 20 06:40:03.019601 kernel: cni0: port 1(vethabdc772a) entered blocking state Jan 20 06:40:03.020135 kernel: cni0: port 1(vethabdc772a) entered disabled state Jan 20 06:40:03.020628 kernel: vethabdc772a: entered allmulticast mode Jan 20 06:40:03.029757 kernel: vethabdc772a: entered promiscuous mode Jan 20 06:40:03.032217 systemd-networkd[1494]: vethabdc772a: Link UP Jan 20 06:40:03.038571 systemd-networkd[1494]: cni0: Lost carrier Jan 20 06:40:03.043677 kernel: cni0: port 1(vethabdc772a) entered blocking state Jan 20 06:40:03.043844 kernel: cni0: port 1(vethabdc772a) entered forwarding state Jan 20 06:40:03.045366 systemd-networkd[1494]: vethabdc772a: Gained carrier Jan 20 06:40:03.049540 systemd-networkd[1494]: cni0: Gained carrier Jan 20 06:40:03.066529 containerd[1591]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 20 06:40:03.066529 containerd[1591]: delegateAdd: 
netconf sent to delegate plugin: Jan 20 06:40:03.142580 containerd[1591]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T06:40:03.142512789Z" level=info msg="connecting to shim 50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50" address="unix:///run/containerd/s/cc0e75bce2124e8b379e65377b6497f154a053f0aee5ec50de80ad7dc97d3036" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:40:03.190571 systemd[1]: Started cri-containerd-50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50.scope - libcontainer container 50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50. Jan 20 06:40:03.273424 containerd[1591]: time="2026-01-20T06:40:03.273352416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7dz2f,Uid:fd9d80b2-4b14-4675-a245-ce117a79af16,Namespace:kube-system,Attempt:0,} returns sandbox id \"50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50\"" Jan 20 06:40:03.278381 kubelet[2729]: E0120 06:40:03.278272 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:03.285384 containerd[1591]: time="2026-01-20T06:40:03.284792282Z" level=info msg="CreateContainer within sandbox \"50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:40:03.305198 containerd[1591]: time="2026-01-20T06:40:03.305136601Z" level=info msg="Container 726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:40:03.308119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323340987.mount: Deactivated successfully. Jan 20 06:40:03.317618 containerd[1591]: time="2026-01-20T06:40:03.317530383Z" level=info msg="CreateContainer within sandbox \"50d26e51e252ec8c04850d59bbfcb08b815f956eb114f4af1a243470263e1a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a\"" Jan 20 06:40:03.321793 containerd[1591]: time="2026-01-20T06:40:03.321721918Z" level=info msg="StartContainer for \"726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a\"" Jan 20 06:40:03.323895 containerd[1591]: time="2026-01-20T06:40:03.323785783Z" level=info msg="connecting to shim 726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a" address="unix:///run/containerd/s/cc0e75bce2124e8b379e65377b6497f154a053f0aee5ec50de80ad7dc97d3036" protocol=ttrpc version=3 Jan 20 06:40:03.363232 systemd[1]: Started cri-containerd-726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a.scope - libcontainer container 726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a. 
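The two coredns sandbox failures at 06:39:52 trace back to /run/flannel/subnet.env not existing yet: the flannel CNI plugin's loadFlannelSubnetEnv reads that file, and it only appears once the kube-flannel container is running (flannel.1 gains carrier at 06:39:53, and the retried coredns-668d6bf9bc-7dz2f sandbox at 06:40:03 then succeeds). Below is a rough sketch of reading such a file, assuming the commonly documented FLANNEL_* keys; the exact contents on this node are not in the log, and the quoted values are inferred from the CNI config printed above.

// subnetenv.go: illustrative parser for /run/flannel/subnet.env.
// The real flannel CNI plugin's loadFlannelSubnetEnv does the equivalent; this
// sketch only assumes the commonly documented keys
// (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func loadSubnetEnv(path string) (map[string]string, error) {
	// Fails with "no such file or directory" until flanneld has written the file,
	// which is exactly the condition behind the RunPodSandbox errors above.
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	env := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadSubnetEnv failed:", err)
		os.Exit(1)
	}
	// On this node the values would presumably line up with the delegate CNI
	// config logged above: network 192.168.0.0/17, node subnet within
	// 192.168.0.0/24, MTU 1450 (inferred, not shown in the log).
	fmt.Println(env["FLANNEL_NETWORK"], env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])
}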
Jan 20 06:40:03.432036 containerd[1591]: time="2026-01-20T06:40:03.431973204Z" level=info msg="StartContainer for \"726e3ecb3e44958e7fa788909056ecb33b6eb9328aad95e80a37d37587d9ee3a\" returns successfully" Jan 20 06:40:04.102452 kubelet[2729]: E0120 06:40:04.101800 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:04.150189 kubelet[2729]: I0120 06:40:04.150005 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7dz2f" podStartSLOduration=21.149866246 podStartE2EDuration="21.149866246s" podCreationTimestamp="2026-01-20 06:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:40:04.121497436 +0000 UTC m=+27.326915304" watchObservedRunningTime="2026-01-20 06:40:04.149866246 +0000 UTC m=+27.355284116" Jan 20 06:40:04.802967 systemd-networkd[1494]: cni0: Gained IPv6LL Jan 20 06:40:04.866965 systemd-networkd[1494]: vethabdc772a: Gained IPv6LL Jan 20 06:40:05.103835 kubelet[2729]: E0120 06:40:05.103399 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:06.105986 kubelet[2729]: E0120 06:40:06.105881 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:06.966733 kubelet[2729]: E0120 06:40:06.965868 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:06.968402 containerd[1591]: time="2026-01-20T06:40:06.968346718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fzk2,Uid:ed34b30a-8831-4947-9d9f-0e979230da6d,Namespace:kube-system,Attempt:0,}" Jan 20 06:40:06.993892 systemd-networkd[1494]: veth881bfed8: Link UP Jan 20 06:40:06.996747 kernel: cni0: port 2(veth881bfed8) entered blocking state Jan 20 06:40:06.996872 kernel: cni0: port 2(veth881bfed8) entered disabled state Jan 20 06:40:07.000309 kernel: veth881bfed8: entered allmulticast mode Jan 20 06:40:07.000457 kernel: veth881bfed8: entered promiscuous mode Jan 20 06:40:07.009454 kernel: cni0: port 2(veth881bfed8) entered blocking state Jan 20 06:40:07.009608 kernel: cni0: port 2(veth881bfed8) entered forwarding state Jan 20 06:40:07.009535 systemd-networkd[1494]: veth881bfed8: Gained carrier Jan 20 06:40:07.017352 containerd[1591]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Jan 20 06:40:07.017352 containerd[1591]: delegateAdd: netconf sent to delegate plugin: Jan 20 06:40:07.049593 containerd[1591]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T06:40:07.049452873Z" level=info msg="connecting to shim 6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6" address="unix:///run/containerd/s/3598f627e46f624b6bf3bf55c1b0936beb4ac89923594cb899ab7c05971d843b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:40:07.100059 systemd[1]: Started cri-containerd-6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6.scope - libcontainer container 6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6. Jan 20 06:40:07.173793 containerd[1591]: time="2026-01-20T06:40:07.173747898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fzk2,Uid:ed34b30a-8831-4947-9d9f-0e979230da6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6\"" Jan 20 06:40:07.174887 kubelet[2729]: E0120 06:40:07.174855 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:07.180168 containerd[1591]: time="2026-01-20T06:40:07.179915462Z" level=info msg="CreateContainer within sandbox \"6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:40:07.193729 containerd[1591]: time="2026-01-20T06:40:07.191944755Z" level=info msg="Container 1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:40:07.195027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670321840.mount: Deactivated successfully. Jan 20 06:40:07.203344 containerd[1591]: time="2026-01-20T06:40:07.203213928Z" level=info msg="CreateContainer within sandbox \"6c5f4adb57502a784192a726bb4b4eef518f5ca17643b284ebdfb77423d872b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c\"" Jan 20 06:40:07.203919 containerd[1591]: time="2026-01-20T06:40:07.203892149Z" level=info msg="StartContainer for \"1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c\"" Jan 20 06:40:07.205120 containerd[1591]: time="2026-01-20T06:40:07.205054818Z" level=info msg="connecting to shim 1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c" address="unix:///run/containerd/s/3598f627e46f624b6bf3bf55c1b0936beb4ac89923594cb899ab7c05971d843b" protocol=ttrpc version=3 Jan 20 06:40:07.233024 systemd[1]: Started cri-containerd-1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c.scope - libcontainer container 1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c. 
Jan 20 06:40:07.272117 containerd[1591]: time="2026-01-20T06:40:07.272004723Z" level=info msg="StartContainer for \"1f6ee4d7985f228f246a397b24525e6cfa68af536e5ccf5f0f591531c2beb49c\" returns successfully" Jan 20 06:40:08.115343 kubelet[2729]: E0120 06:40:08.114591 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:08.130933 systemd-networkd[1494]: veth881bfed8: Gained IPv6LL Jan 20 06:40:08.171191 kubelet[2729]: I0120 06:40:08.171076 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2fzk2" podStartSLOduration=25.171036901 podStartE2EDuration="25.171036901s" podCreationTimestamp="2026-01-20 06:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:40:08.156424721 +0000 UTC m=+31.361842621" watchObservedRunningTime="2026-01-20 06:40:08.171036901 +0000 UTC m=+31.376454767" Jan 20 06:40:09.117283 kubelet[2729]: E0120 06:40:09.117149 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:10.119204 kubelet[2729]: E0120 06:40:10.119154 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:40:23.316622 systemd[1]: Started sshd@5-164.92.87.233:22-20.161.92.111:53504.service - OpenSSH per-connection server daemon (20.161.92.111:53504). Jan 20 06:40:23.720156 sshd[3692]: Accepted publickey for core from 20.161.92.111 port 53504 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:23.722233 sshd-session[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:23.735501 systemd-logind[1565]: New session 7 of user core. Jan 20 06:40:23.741309 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 06:40:24.003219 sshd[3717]: Connection closed by 20.161.92.111 port 53504 Jan 20 06:40:24.003680 sshd-session[3692]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:24.010835 systemd[1]: sshd@5-164.92.87.233:22-20.161.92.111:53504.service: Deactivated successfully. Jan 20 06:40:24.014607 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 06:40:24.018504 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Jan 20 06:40:24.021407 systemd-logind[1565]: Removed session 7. Jan 20 06:40:29.077259 systemd[1]: Started sshd@6-164.92.87.233:22-20.161.92.111:53510.service - OpenSSH per-connection server daemon (20.161.92.111:53510). Jan 20 06:40:29.439557 sshd[3751]: Accepted publickey for core from 20.161.92.111 port 53510 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:29.442065 sshd-session[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:29.450850 systemd-logind[1565]: New session 8 of user core. Jan 20 06:40:29.459076 systemd[1]: Started session-8.scope - Session 8 of User core. 
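The "Observed pod startup duration" entries above report two figures: podStartE2EDuration is the wall time from pod creation to observed running, and podStartSLOduration additionally excludes the image-pull window, consistent with the Kubernetes pod-startup SLO definition. For kube-flannel-ds-swzmf, the 06:39:53 entry's numbers line up exactly when computed from the monotonic m=+ offsets it logs:

    pull window         = 14.638960054 s - 7.431295487 s = 7.207664567 s
    podStartSLOduration = 10.08168873 s - 7.207664567 s  = 2.874024163 s

For kube-proxy-mrsns and the two coredns pods no pull was needed (firstStartedPulling and lastFinishedPulling are zero timestamps), so the SLO and E2E durations are reported as equal.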
Jan 20 06:40:29.706345 sshd[3755]: Connection closed by 20.161.92.111 port 53510 Jan 20 06:40:29.707161 sshd-session[3751]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:29.715413 systemd[1]: sshd@6-164.92.87.233:22-20.161.92.111:53510.service: Deactivated successfully. Jan 20 06:40:29.719397 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 06:40:29.722526 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. Jan 20 06:40:29.723605 systemd-logind[1565]: Removed session 8. Jan 20 06:40:34.790586 systemd[1]: Started sshd@7-164.92.87.233:22-20.161.92.111:49518.service - OpenSSH per-connection server daemon (20.161.92.111:49518). Jan 20 06:40:35.161101 sshd[3789]: Accepted publickey for core from 20.161.92.111 port 49518 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:35.162962 sshd-session[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:35.171093 systemd-logind[1565]: New session 9 of user core. Jan 20 06:40:35.177105 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 06:40:35.420956 sshd[3793]: Connection closed by 20.161.92.111 port 49518 Jan 20 06:40:35.422749 sshd-session[3789]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:35.429433 systemd[1]: sshd@7-164.92.87.233:22-20.161.92.111:49518.service: Deactivated successfully. Jan 20 06:40:35.433603 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 06:40:35.437844 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Jan 20 06:40:35.442393 systemd-logind[1565]: Removed session 9. Jan 20 06:40:35.510641 systemd[1]: Started sshd@8-164.92.87.233:22-20.161.92.111:49530.service - OpenSSH per-connection server daemon (20.161.92.111:49530). Jan 20 06:40:35.907682 sshd[3805]: Accepted publickey for core from 20.161.92.111 port 49530 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:35.910534 sshd-session[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:35.918069 systemd-logind[1565]: New session 10 of user core. Jan 20 06:40:35.929207 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 06:40:36.233098 sshd[3809]: Connection closed by 20.161.92.111 port 49530 Jan 20 06:40:36.233938 sshd-session[3805]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:36.239969 systemd[1]: sshd@8-164.92.87.233:22-20.161.92.111:49530.service: Deactivated successfully. Jan 20 06:40:36.243897 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 06:40:36.245942 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Jan 20 06:40:36.249104 systemd-logind[1565]: Removed session 10. Jan 20 06:40:36.307732 systemd[1]: Started sshd@9-164.92.87.233:22-20.161.92.111:49544.service - OpenSSH per-connection server daemon (20.161.92.111:49544). Jan 20 06:40:36.667965 sshd[3819]: Accepted publickey for core from 20.161.92.111 port 49544 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:36.670023 sshd-session[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:36.676103 systemd-logind[1565]: New session 11 of user core. Jan 20 06:40:36.685311 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 06:40:36.941213 sshd[3823]: Connection closed by 20.161.92.111 port 49544 Jan 20 06:40:36.941879 sshd-session[3819]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:36.949102 systemd[1]: sshd@9-164.92.87.233:22-20.161.92.111:49544.service: Deactivated successfully. Jan 20 06:40:36.951660 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 06:40:36.953333 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Jan 20 06:40:36.955361 systemd-logind[1565]: Removed session 11. Jan 20 06:40:42.016832 systemd[1]: Started sshd@10-164.92.87.233:22-20.161.92.111:49548.service - OpenSSH per-connection server daemon (20.161.92.111:49548). Jan 20 06:40:42.380456 sshd[3858]: Accepted publickey for core from 20.161.92.111 port 49548 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:42.382788 sshd-session[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:42.389773 systemd-logind[1565]: New session 12 of user core. Jan 20 06:40:42.397157 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 06:40:42.642423 sshd[3862]: Connection closed by 20.161.92.111 port 49548 Jan 20 06:40:42.642187 sshd-session[3858]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:42.649073 systemd[1]: sshd@10-164.92.87.233:22-20.161.92.111:49548.service: Deactivated successfully. Jan 20 06:40:42.651966 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 06:40:42.653843 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. Jan 20 06:40:42.657986 systemd-logind[1565]: Removed session 12. Jan 20 06:40:42.720844 systemd[1]: Started sshd@11-164.92.87.233:22-20.161.92.111:49128.service - OpenSSH per-connection server daemon (20.161.92.111:49128). Jan 20 06:40:43.115837 sshd[3874]: Accepted publickey for core from 20.161.92.111 port 49128 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:43.118764 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:43.125988 systemd-logind[1565]: New session 13 of user core. Jan 20 06:40:43.133137 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 06:40:43.694540 sshd[3878]: Connection closed by 20.161.92.111 port 49128 Jan 20 06:40:43.697020 sshd-session[3874]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:43.704497 systemd[1]: sshd@11-164.92.87.233:22-20.161.92.111:49128.service: Deactivated successfully. Jan 20 06:40:43.707751 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 06:40:43.710785 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Jan 20 06:40:43.712339 systemd-logind[1565]: Removed session 13. Jan 20 06:40:43.789011 systemd[1]: Started sshd@12-164.92.87.233:22-20.161.92.111:49140.service - OpenSSH per-connection server daemon (20.161.92.111:49140). Jan 20 06:40:44.206289 sshd[3909]: Accepted publickey for core from 20.161.92.111 port 49140 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:44.210009 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:44.216055 systemd-logind[1565]: New session 14 of user core. Jan 20 06:40:44.226071 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 06:40:45.207273 sshd[3913]: Connection closed by 20.161.92.111 port 49140 Jan 20 06:40:45.206978 sshd-session[3909]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:45.215173 systemd[1]: sshd@12-164.92.87.233:22-20.161.92.111:49140.service: Deactivated successfully. Jan 20 06:40:45.220449 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 06:40:45.226102 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit. Jan 20 06:40:45.229865 systemd-logind[1565]: Removed session 14. Jan 20 06:40:45.290149 systemd[1]: Started sshd@13-164.92.87.233:22-20.161.92.111:49156.service - OpenSSH per-connection server daemon (20.161.92.111:49156). Jan 20 06:40:45.671378 sshd[3932]: Accepted publickey for core from 20.161.92.111 port 49156 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:45.675283 sshd-session[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:45.688642 systemd-logind[1565]: New session 15 of user core. Jan 20 06:40:45.706273 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 06:40:46.142180 sshd[3936]: Connection closed by 20.161.92.111 port 49156 Jan 20 06:40:46.143947 sshd-session[3932]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:46.151650 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Jan 20 06:40:46.153202 systemd[1]: sshd@13-164.92.87.233:22-20.161.92.111:49156.service: Deactivated successfully. Jan 20 06:40:46.157274 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 06:40:46.161187 systemd-logind[1565]: Removed session 15. Jan 20 06:40:46.224006 systemd[1]: Started sshd@14-164.92.87.233:22-20.161.92.111:49158.service - OpenSSH per-connection server daemon (20.161.92.111:49158). Jan 20 06:40:46.626182 sshd[3945]: Accepted publickey for core from 20.161.92.111 port 49158 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:46.628614 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:46.639630 systemd-logind[1565]: New session 16 of user core. Jan 20 06:40:46.646113 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 06:40:46.916601 sshd[3949]: Connection closed by 20.161.92.111 port 49158 Jan 20 06:40:46.917293 sshd-session[3945]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:46.923339 systemd[1]: sshd@14-164.92.87.233:22-20.161.92.111:49158.service: Deactivated successfully. Jan 20 06:40:46.926428 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 06:40:46.929415 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit. Jan 20 06:40:46.932156 systemd-logind[1565]: Removed session 16. Jan 20 06:40:51.991424 systemd[1]: Started sshd@15-164.92.87.233:22-20.161.92.111:49168.service - OpenSSH per-connection server daemon (20.161.92.111:49168). Jan 20 06:40:52.363749 sshd[3984]: Accepted publickey for core from 20.161.92.111 port 49168 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:52.365248 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:52.372440 systemd-logind[1565]: New session 17 of user core. Jan 20 06:40:52.378021 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 06:40:52.631439 sshd[3988]: Connection closed by 20.161.92.111 port 49168 Jan 20 06:40:52.632261 sshd-session[3984]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:52.639220 systemd[1]: sshd@15-164.92.87.233:22-20.161.92.111:49168.service: Deactivated successfully. Jan 20 06:40:52.642506 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 06:40:52.644834 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit. Jan 20 06:40:52.646352 systemd-logind[1565]: Removed session 17. Jan 20 06:40:57.706109 systemd[1]: Started sshd@16-164.92.87.233:22-20.161.92.111:42498.service - OpenSSH per-connection server daemon (20.161.92.111:42498). Jan 20 06:40:58.063725 sshd[4021]: Accepted publickey for core from 20.161.92.111 port 42498 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:40:58.065890 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:58.073101 systemd-logind[1565]: New session 18 of user core. Jan 20 06:40:58.079012 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 06:40:58.332771 sshd[4025]: Connection closed by 20.161.92.111 port 42498 Jan 20 06:40:58.333474 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:58.340475 systemd[1]: sshd@16-164.92.87.233:22-20.161.92.111:42498.service: Deactivated successfully. Jan 20 06:40:58.345273 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 06:40:58.348085 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit. Jan 20 06:40:58.349569 systemd-logind[1565]: Removed session 18. Jan 20 06:40:58.962722 kubelet[2729]: E0120 06:40:58.962124 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:41:00.961932 kubelet[2729]: E0120 06:41:00.961854 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:41:02.963756 kubelet[2729]: E0120 06:41:02.962050 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 20 06:41:03.409897 systemd[1]: Started sshd@17-164.92.87.233:22-20.161.92.111:44646.service - OpenSSH per-connection server daemon (20.161.92.111:44646). Jan 20 06:41:03.771833 sshd[4059]: Accepted publickey for core from 20.161.92.111 port 44646 ssh2: RSA SHA256:okcbLryKYXXxHtmjKLtyw7T53+VMFeEPQzF5v7swOiM Jan 20 06:41:03.774631 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:03.786808 systemd-logind[1565]: New session 19 of user core. Jan 20 06:41:03.791379 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 06:41:04.038271 sshd[4080]: Connection closed by 20.161.92.111 port 44646 Jan 20 06:41:04.039915 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:04.046938 systemd[1]: sshd@17-164.92.87.233:22-20.161.92.111:44646.service: Deactivated successfully. Jan 20 06:41:04.051834 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 06:41:04.053287 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit. Jan 20 06:41:04.055615 systemd-logind[1565]: Removed session 19.