Jan 30 05:01:14.017791 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 05:01:14.017834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:14.017858 kernel: BIOS-provided physical RAM map:
Jan 30 05:01:14.017874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:01:14.017888 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:01:14.017903 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:01:14.017923 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 05:01:14.017939 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 05:01:14.017955 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:01:14.017974 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:01:14.017991 kernel: NX (Execute Disable) protection: active
Jan 30 05:01:14.018007 kernel: APIC: Static calls initialized
Jan 30 05:01:14.018027 kernel: SMBIOS 2.8 present.
Jan 30 05:01:14.018044 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 05:01:14.018065 kernel: Hypervisor detected: KVM
Jan 30 05:01:14.018087 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:01:14.018108 kernel: kvm-clock: using sched offset of 4406799529 cycles
Jan 30 05:01:14.018127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:01:14.018145 kernel: tsc: Detected 2294.608 MHz processor
Jan 30 05:01:14.018164 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:01:14.018184 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:01:14.018203 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 05:01:14.018221 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:01:14.018239 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:01:14.018263 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:01:14.018281 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 05:01:14.018302 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018321 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018339 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018359 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 05:01:14.018383 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018401 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018420 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018443 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:14.018467 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 05:01:14.018489 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 05:01:14.018510 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 05:01:14.019147 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 05:01:14.019176 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 05:01:14.019197 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 05:01:14.019230 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 05:01:14.019250 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 05:01:14.019269 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 05:01:14.019290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 05:01:14.019309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 05:01:14.019335 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 05:01:14.019355 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 05:01:14.019378 kernel: Zone ranges:
Jan 30 05:01:14.019398 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:01:14.020804 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 05:01:14.020826 kernel: Normal empty
Jan 30 05:01:14.020846 kernel: Movable zone start for each node
Jan 30 05:01:14.020867 kernel: Early memory node ranges
Jan 30 05:01:14.020889 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:01:14.020904 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 05:01:14.020917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 05:01:14.020946 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:01:14.020966 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:01:14.020993 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 05:01:14.021009 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:01:14.021031 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:01:14.021051 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:01:14.021071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:01:14.021092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:01:14.021114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:01:14.021143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:01:14.021164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:01:14.021185 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:01:14.021211 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:01:14.021235 kernel: TSC deadline timer available
Jan 30 05:01:14.021256 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:01:14.021274 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:01:14.021300 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 05:01:14.021329 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:01:14.021345 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:01:14.021374 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:01:14.021394 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:01:14.021419 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:01:14.021433 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:01:14.021445 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:01:14.021461 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:14.021480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:01:14.021507 kernel: random: crng init done
Jan 30 05:01:14.021532 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:01:14.021559 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:01:14.021584 kernel: Fallback order for Node 0: 0
Jan 30 05:01:14.021606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 05:01:14.021626 kernel: Policy zone: DMA32
Jan 30 05:01:14.021646 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:01:14.021666 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 05:01:14.021685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:01:14.021709 kernel: Kernel/User page tables isolation: enabled
Jan 30 05:01:14.021729 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 05:01:14.022781 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:01:14.022803 kernel: Dynamic Preempt: voluntary
Jan 30 05:01:14.022823 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:01:14.022844 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:01:14.022864 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:01:14.022884 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:01:14.022903 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:01:14.022923 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:01:14.022948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:01:14.022968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:01:14.022987 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:01:14.023007 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:01:14.023032 kernel: Console: colour VGA+ 80x25
Jan 30 05:01:14.023052 kernel: printk: console [tty0] enabled
Jan 30 05:01:14.023071 kernel: printk: console [ttyS0] enabled
Jan 30 05:01:14.023091 kernel: ACPI: Core revision 20230628
Jan 30 05:01:14.023111 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:01:14.023136 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:01:14.023153 kernel: x2apic enabled
Jan 30 05:01:14.023165 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:01:14.023182 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:01:14.023205 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 30 05:01:14.023230 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 30 05:01:14.023250 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 05:01:14.023270 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 05:01:14.023307 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:01:14.023328 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:01:14.023349 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:01:14.023373 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:01:14.023394 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 05:01:14.023415 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:01:14.023437 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:01:14.023458 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 05:01:14.023479 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 05:01:14.023514 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:01:14.023557 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:01:14.023577 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:01:14.023592 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:01:14.023605 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 05:01:14.023624 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:01:14.023666 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:01:14.023699 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:01:14.023726 kernel: landlock: Up and running.
Jan 30 05:01:14.024875 kernel: SELinux: Initializing.
Jan 30 05:01:14.024902 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:01:14.024921 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:01:14.024947 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 05:01:14.024969 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:14.024990 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:14.025014 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:14.025035 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 05:01:14.025067 kernel: signal: max sigframe size: 1776
Jan 30 05:01:14.025089 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:01:14.025112 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:01:14.025141 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 05:01:14.025165 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:01:14.025189 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:01:14.025214 kernel: .... node #0, CPUs: #1
Jan 30 05:01:14.025229 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:01:14.025251 kernel: smpboot: Max logical packages: 1
Jan 30 05:01:14.025271 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 30 05:01:14.025285 kernel: devtmpfs: initialized
Jan 30 05:01:14.025299 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:01:14.025313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:01:14.025335 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:01:14.025357 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:01:14.025385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:01:14.025413 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:01:14.025441 kernel: audit: type=2000 audit(1738213272.878:1): state=initialized audit_enabled=0 res=1
Jan 30 05:01:14.025466 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:01:14.025487 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:01:14.025508 kernel: cpuidle: using governor menu
Jan 30 05:01:14.025529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:01:14.025551 kernel: dca service started, version 1.12.1
Jan 30 05:01:14.025572 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:01:14.025593 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:01:14.025614 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:01:14.025635 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:01:14.025660 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:01:14.025681 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:01:14.025702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:01:14.025723 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:01:14.025755 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:01:14.025795 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:01:14.025816 kernel: ACPI: Interpreter enabled
Jan 30 05:01:14.025837 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:01:14.025858 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:01:14.025884 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:01:14.025905 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:01:14.025926 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 05:01:14.025947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:01:14.026277 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:01:14.026402 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 05:01:14.026504 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 05:01:14.026522 kernel: acpiphp: Slot [3] registered
Jan 30 05:01:14.026532 kernel: acpiphp: Slot [4] registered
Jan 30 05:01:14.026541 kernel: acpiphp: Slot [5] registered
Jan 30 05:01:14.026551 kernel: acpiphp: Slot [6] registered
Jan 30 05:01:14.026560 kernel: acpiphp: Slot [7] registered
Jan 30 05:01:14.026570 kernel: acpiphp: Slot [8] registered
Jan 30 05:01:14.026579 kernel: acpiphp: Slot [9] registered
Jan 30 05:01:14.026588 kernel: acpiphp: Slot [10] registered
Jan 30 05:01:14.026598 kernel: acpiphp: Slot [11] registered
Jan 30 05:01:14.026610 kernel: acpiphp: Slot [12] registered
Jan 30 05:01:14.026620 kernel: acpiphp: Slot [13] registered
Jan 30 05:01:14.026629 kernel: acpiphp: Slot [14] registered
Jan 30 05:01:14.026639 kernel: acpiphp: Slot [15] registered
Jan 30 05:01:14.026648 kernel: acpiphp: Slot [16] registered
Jan 30 05:01:14.026658 kernel: acpiphp: Slot [17] registered
Jan 30 05:01:14.026667 kernel: acpiphp: Slot [18] registered
Jan 30 05:01:14.026676 kernel: acpiphp: Slot [19] registered
Jan 30 05:01:14.026686 kernel: acpiphp: Slot [20] registered
Jan 30 05:01:14.026695 kernel: acpiphp: Slot [21] registered
Jan 30 05:01:14.026707 kernel: acpiphp: Slot [22] registered
Jan 30 05:01:14.026717 kernel: acpiphp: Slot [23] registered
Jan 30 05:01:14.026726 kernel: acpiphp: Slot [24] registered
Jan 30 05:01:14.027762 kernel: acpiphp: Slot [25] registered
Jan 30 05:01:14.028792 kernel: acpiphp: Slot [26] registered
Jan 30 05:01:14.028823 kernel: acpiphp: Slot [27] registered
Jan 30 05:01:14.028837 kernel: acpiphp: Slot [28] registered
Jan 30 05:01:14.028852 kernel: acpiphp: Slot [29] registered
Jan 30 05:01:14.028867 kernel: acpiphp: Slot [30] registered
Jan 30 05:01:14.028892 kernel: acpiphp: Slot [31] registered
Jan 30 05:01:14.028906 kernel: PCI host bridge to bus 0000:00
Jan 30 05:01:14.029128 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:01:14.029286 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:01:14.029446 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:01:14.029576 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 05:01:14.029705 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 05:01:14.029877 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:01:14.030084 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 05:01:14.030263 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 05:01:14.030433 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 05:01:14.030604 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 05:01:14.033900 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 05:01:14.034149 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 05:01:14.034344 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 05:01:14.034492 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 05:01:14.034646 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 05:01:14.034777 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 05:01:14.034897 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 05:01:14.034998 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 05:01:14.035130 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 05:01:14.035249 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:01:14.035350 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 05:01:14.035453 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 05:01:14.035557 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 05:01:14.035659 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 05:01:14.035799 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:01:14.036041 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:01:14.036203 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 05:01:14.036381 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 05:01:14.036522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 05:01:14.036733 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:01:14.037021 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 05:01:14.037185 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 05:01:14.037365 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 05:01:14.037557 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 05:01:14.037722 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 05:01:14.041111 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 05:01:14.041342 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 05:01:14.041509 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:01:14.041616 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 05:01:14.041729 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 05:01:14.043015 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 05:01:14.043217 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:01:14.043429 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 05:01:14.043635 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 05:01:14.045870 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 05:01:14.046095 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 05:01:14.046323 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 05:01:14.046517 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 05:01:14.046552 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:01:14.046576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:01:14.046600 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:01:14.046625 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:01:14.046657 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 05:01:14.046680 kernel: iommu: Default domain type: Translated
Jan 30 05:01:14.046705 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:01:14.046725 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:01:14.047789 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:01:14.047813 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:01:14.047834 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 05:01:14.048048 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 05:01:14.048210 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 05:01:14.048367 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:01:14.048393 kernel: vgaarb: loaded
Jan 30 05:01:14.048415 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:01:14.048436 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:01:14.048457 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:01:14.048478 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:01:14.048500 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:01:14.048521 kernel: pnp: PnP ACPI init
Jan 30 05:01:14.048542 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 05:01:14.048568 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:01:14.048589 kernel: NET: Registered PF_INET protocol family
Jan 30 05:01:14.048611 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:01:14.048632 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:01:14.048653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:01:14.048675 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:01:14.048696 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:01:14.048717 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:01:14.051566 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:01:14.051611 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:01:14.051633 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:01:14.051655 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:01:14.051856 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:01:14.052029 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:01:14.052163 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:01:14.052293 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 05:01:14.052427 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 05:01:14.052593 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 05:01:14.052789 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 05:01:14.052819 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 05:01:14.052968 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 39922 usecs
Jan 30 05:01:14.052995 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:01:14.053016 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 05:01:14.053038 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 30 05:01:14.053059 kernel: Initialise system trusted keyrings
Jan 30 05:01:14.053087 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:01:14.053113 kernel: Key type asymmetric registered
Jan 30 05:01:14.053137 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:01:14.053163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:01:14.053187 kernel: io scheduler mq-deadline registered
Jan 30 05:01:14.053208 kernel: io scheduler kyber registered
Jan 30 05:01:14.053235 kernel: io scheduler bfq registered
Jan 30 05:01:14.053258 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:01:14.053280 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 05:01:14.053301 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 05:01:14.053328 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 05:01:14.053354 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:01:14.053378 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 05:01:14.053399 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 05:01:14.053419 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 05:01:14.053446 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 05:01:14.053678 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 05:01:14.055961 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 05:01:14.056135 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 30 05:01:14.056262 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:01:13 UTC (1738213273)
Jan 30 05:01:14.056360 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 05:01:14.056373 kernel: intel_pstate: CPU model not supported
Jan 30 05:01:14.056383 kernel: NET: Registered PF_INET6 protocol family
Jan 30 05:01:14.056393 kernel: Segment Routing with IPv6
Jan 30 05:01:14.056403 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 05:01:14.056412 kernel: NET: Registered PF_PACKET protocol family
Jan 30 05:01:14.056428 kernel: Key type dns_resolver registered
Jan 30 05:01:14.056438 kernel: IPI shorthand broadcast: enabled
Jan 30 05:01:14.056448 kernel: sched_clock: Marking stable (1207003969, 176945861)->(1422037239, -38087409)
Jan 30 05:01:14.056457 kernel: registered taskstats version 1
Jan 30 05:01:14.056467 kernel: Loading compiled-in X.509 certificates
Jan 30 05:01:14.056477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 05:01:14.056487 kernel: Key type .fscrypt registered
Jan 30 05:01:14.056496 kernel: Key type fscrypt-provisioning registered
Jan 30 05:01:14.056506 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 05:01:14.056518 kernel: ima: Allocated hash algorithm: sha1
Jan 30 05:01:14.056528 kernel: ima: No architecture policies found
Jan 30 05:01:14.056538 kernel: clk: Disabling unused clocks
Jan 30 05:01:14.056547 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 05:01:14.056557 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 05:01:14.056585 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 05:01:14.056597 kernel: Run /init as init process
Jan 30 05:01:14.056608 kernel:   with arguments:
Jan 30 05:01:14.056618 kernel:     /init
Jan 30 05:01:14.056632 kernel:   with environment:
Jan 30 05:01:14.056642 kernel:     HOME=/
Jan 30 05:01:14.056652 kernel:     TERM=linux
Jan 30 05:01:14.056661 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 05:01:14.056675 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:01:14.056688 systemd[1]: Detected virtualization kvm.
Jan 30 05:01:14.056699 systemd[1]: Detected architecture x86-64.
Jan 30 05:01:14.056709 systemd[1]: Running in initrd.
Jan 30 05:01:14.056723 systemd[1]: No hostname configured, using default hostname.
Jan 30 05:01:14.056733 systemd[1]: Hostname set to .
Jan 30 05:01:14.056755 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:01:14.056766 systemd[1]: Queued start job for default target initrd.target.
Jan 30 05:01:14.056776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:01:14.056787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:01:14.056799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 05:01:14.056810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:01:14.056824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 05:01:14.056834 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 05:01:14.056846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 05:01:14.056857 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 05:01:14.056868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:01:14.056879 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:01:14.056892 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:01:14.056903 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:01:14.056913 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:01:14.056927 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:01:14.056937 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:01:14.056948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:01:14.056961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:01:14.056972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:01:14.056983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:01:14.056993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:01:14.057004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:01:14.057014 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:01:14.057025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 05:01:14.057036 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:01:14.057049 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 05:01:14.057060 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 05:01:14.057074 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:01:14.057095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:01:14.057117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:14.057140 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 05:01:14.057210 systemd-journald[184]: Collecting audit messages is disabled.
Jan 30 05:01:14.057265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:01:14.057289 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 05:01:14.057315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:01:14.057345 systemd-journald[184]: Journal started
Jan 30 05:01:14.057396 systemd-journald[184]: Runtime Journal (/run/log/journal/cb952e0e7b434d32842aa38c84d36856) is 4.9M, max 39.3M, 34.4M free.
Jan 30 05:01:14.019445 systemd-modules-load[185]: Inserted module 'overlay'
Jan 30 05:01:14.107401 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:01:14.107467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 05:01:14.107501 kernel: Bridge firewalling registered
Jan 30 05:01:14.062365 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 30 05:01:14.109306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:01:14.110395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:14.116626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:01:14.123965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:14.131210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:01:14.133493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:01:14.140087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:01:14.155145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:01:14.167108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:01:14.169401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:14.171429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:01:14.178004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 05:01:14.180997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:01:14.201847 dracut-cmdline[217]: dracut-dracut-053
Jan 30 05:01:14.209177 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:14.239684 systemd-resolved[218]: Positive Trust Anchors:
Jan 30 05:01:14.239727 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:01:14.239845 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:01:14.243591 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 30 05:01:14.244920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:01:14.247836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:01:14.332807 kernel: SCSI subsystem initialized
Jan 30 05:01:14.344778 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 05:01:14.357805 kernel: iscsi: registered transport (tcp)
Jan 30 05:01:14.384124 kernel: iscsi: registered transport (qla4xxx)
Jan 30 05:01:14.384225 kernel: QLogic iSCSI HBA Driver
Jan 30 05:01:14.444276 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:01:14.451047 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 05:01:14.487669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 05:01:14.487774 kernel: device-mapper: uevent: version 1.0.3
Jan 30 05:01:14.490775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 05:01:14.541790 kernel: raid6: avx2x4 gen() 18062 MB/s
Jan 30 05:01:14.559793 kernel: raid6: avx2x2 gen() 17102 MB/s
Jan 30 05:01:14.577096 kernel: raid6: avx2x1 gen() 13327 MB/s
Jan 30 05:01:14.577198 kernel: raid6: using algorithm avx2x4 gen() 18062 MB/s
Jan 30 05:01:14.596873 kernel: raid6: .... xor() 6766 MB/s, rmw enabled
Jan 30 05:01:14.596981 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 05:01:14.622791 kernel: xor: automatically using best checksumming function avx
Jan 30 05:01:14.817797 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 05:01:14.833818 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:01:14.841046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:01:14.871006 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 30 05:01:14.879262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:01:14.889496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 05:01:14.924247 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 30 05:01:14.972643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:01:14.980278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:01:15.082246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:01:15.091024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 05:01:15.138383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:01:15.143534 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:01:15.144404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:01:15.149210 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:01:15.157069 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 05:01:15.190213 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:01:15.214794 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 05:01:15.312072 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 05:01:15.312296 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 05:01:15.312320 kernel: scsi host0: Virtio SCSI HBA
Jan 30 05:01:15.312559 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 05:01:15.312582 kernel: GPT:9289727 != 125829119
Jan 30 05:01:15.312602 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 05:01:15.312623 kernel: GPT:9289727 != 125829119
Jan 30 05:01:15.312650 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 05:01:15.312670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:15.312688 kernel: libata version 3.00 loaded.
Jan 30 05:01:15.312710 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 05:01:15.328326 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 05:01:15.328364 kernel: AES CTR mode by8 optimization enabled
Jan 30 05:01:15.328388 kernel: scsi host1: ata_piix
Jan 30 05:01:15.328979 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 05:01:15.329225 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 30 05:01:15.329388 kernel: scsi host2: ata_piix
Jan 30 05:01:15.329590 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 05:01:15.329615 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 05:01:15.296924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:01:15.297168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:15.298514 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:15.347963 kernel: ACPI: bus type USB registered
Jan 30 05:01:15.348015 kernel: usbcore: registered new interface driver usbfs
Jan 30 05:01:15.348044 kernel: usbcore: registered new interface driver hub
Jan 30 05:01:15.348070 kernel: usbcore: registered new device driver usb
Jan 30 05:01:15.299527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:01:15.299842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:15.303432 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:15.317409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:15.420820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:15.427138 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:15.457864 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:15.537529 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 05:01:15.560469 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 05:01:15.560729 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 05:01:15.560989 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451)
Jan 30 05:01:15.561012 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 05:01:15.561227 kernel: hub 1-0:1.0: USB hub found
Jan 30 05:01:15.561477 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (448)
Jan 30 05:01:15.561501 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 05:01:15.542948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 05:01:15.561149 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 05:01:15.585294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 05:01:15.586050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 05:01:15.598673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 05:01:15.606041 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 05:01:15.628986 disk-uuid[550]: Primary Header is updated.
Jan 30 05:01:15.628986 disk-uuid[550]: Secondary Entries is updated.
Jan 30 05:01:15.628986 disk-uuid[550]: Secondary Header is updated.
Jan 30 05:01:15.634773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:15.640907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:15.650807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:16.651850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:16.652648 disk-uuid[551]: The operation has completed successfully.
Jan 30 05:01:16.717998 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 05:01:16.719293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 05:01:16.745070 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 05:01:16.749164 sh[564]: Success
Jan 30 05:01:16.770776 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 05:01:16.845797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 05:01:16.864948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 05:01:16.866057 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 05:01:16.900819 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 05:01:16.900949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:16.902452 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 05:01:16.905697 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 05:01:16.905828 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 05:01:16.919337 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 05:01:16.920301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 05:01:16.927156 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 05:01:16.938090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 05:01:16.953022 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:16.953145 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:16.955625 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:01:16.960781 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:01:16.979822 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:16.979955 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 05:01:16.993808 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 05:01:17.003280 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 05:01:17.112095 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:01:17.122067 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:01:17.159123 systemd-networkd[748]: lo: Link UP
Jan 30 05:01:17.159136 systemd-networkd[748]: lo: Gained carrier
Jan 30 05:01:17.162006 systemd-networkd[748]: Enumeration completed
Jan 30 05:01:17.162461 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:01:17.162466 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 05:01:17.163109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:01:17.166142 systemd[1]: Reached target network.target - Network.
Jan 30 05:01:17.168479 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:01:17.168485 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:01:17.169515 systemd-networkd[748]: eth0: Link UP
Jan 30 05:01:17.169522 systemd-networkd[748]: eth0: Gained carrier
Jan 30 05:01:17.169537 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:01:17.179633 systemd-networkd[748]: eth1: Link UP
Jan 30 05:01:17.179640 systemd-networkd[748]: eth1: Gained carrier
Jan 30 05:01:17.179681 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:01:17.199895 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253
Jan 30 05:01:17.205912 systemd-networkd[748]: eth0: DHCPv4 address 137.184.189.202/20, gateway 137.184.176.1 acquired from 169.254.169.253
Jan 30 05:01:17.221254 ignition[652]: Ignition 2.19.0
Jan 30 05:01:17.223782 ignition[652]: Stage: fetch-offline
Jan 30 05:01:17.223976 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:17.223995 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:17.225864 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:01:17.224136 ignition[652]: parsed url from cmdline: ""
Jan 30 05:01:17.224140 ignition[652]: no config URL provided
Jan 30 05:01:17.224147 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:01:17.224158 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:01:17.224165 ignition[652]: failed to fetch config: resource requires networking
Jan 30 05:01:17.224445 ignition[652]: Ignition finished successfully
Jan 30 05:01:17.237121 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 05:01:17.263008 ignition[758]: Ignition 2.19.0
Jan 30 05:01:17.263022 ignition[758]: Stage: fetch
Jan 30 05:01:17.263261 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:17.263273 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:17.263421 ignition[758]: parsed url from cmdline: ""
Jan 30 05:01:17.263427 ignition[758]: no config URL provided
Jan 30 05:01:17.263434 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:01:17.263446 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:01:17.263471 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 05:01:17.290125 ignition[758]: GET result: OK
Jan 30 05:01:17.290975 ignition[758]: parsing config with SHA512: 762f504dd587792a004ebd191da453eaf0319f10f56762197d5ac1ec4d9fe7ab25a0f5ad592b86662ead433e13c6a6db00e3bc5efdc7646daaf5d7148bcd6394
Jan 30 05:01:17.299421 unknown[758]: fetched base config from "system"
Jan 30 05:01:17.299445 unknown[758]: fetched base config from "system"
Jan 30 05:01:17.300454 ignition[758]: fetch: fetch complete
Jan 30 05:01:17.299456 unknown[758]: fetched user config from "digitalocean"
Jan 30 05:01:17.300465 ignition[758]: fetch: fetch passed
Jan 30 05:01:17.303106 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 05:01:17.300563 ignition[758]: Ignition finished successfully
Jan 30 05:01:17.311155 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 05:01:17.346665 ignition[764]: Ignition 2.19.0
Jan 30 05:01:17.346684 ignition[764]: Stage: kargs
Jan 30 05:01:17.347163 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:17.347187 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:17.348787 ignition[764]: kargs: kargs passed
Jan 30 05:01:17.348904 ignition[764]: Ignition finished successfully
Jan 30 05:01:17.350684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 05:01:17.358187 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 05:01:17.397428 ignition[771]: Ignition 2.19.0
Jan 30 05:01:17.397443 ignition[771]: Stage: disks
Jan 30 05:01:17.397720 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:17.397756 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:17.401641 ignition[771]: disks: disks passed
Jan 30 05:01:17.403389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 05:01:17.401802 ignition[771]: Ignition finished successfully
Jan 30 05:01:17.408416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 05:01:17.409764 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:01:17.410964 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:01:17.412287 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:01:17.413678 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:01:17.421142 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:01:17.446876 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 05:01:17.454341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:01:17.462238 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:01:17.591769 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 05:01:17.593323 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:01:17.595507 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:01:17.609052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:01:17.613464 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:01:17.616864 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 05:01:17.628304 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:01:17.642800 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
Jan 30 05:01:17.642851 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:17.642876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:17.642900 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:01:17.641046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:01:17.641105 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:01:17.648289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:01:17.655785 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:01:17.658795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:01:17.667004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:01:17.761781 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 05:01:17.770127 coreos-metadata[790]: Jan 30 05:01:17.770 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 05:01:17.776881 coreos-metadata[789]: Jan 30 05:01:17.776 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 05:01:17.780939 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 30 05:01:17.786836 coreos-metadata[790]: Jan 30 05:01:17.785 INFO Fetch successful
Jan 30 05:01:17.789927 coreos-metadata[789]: Jan 30 05:01:17.789 INFO Fetch successful
Jan 30 05:01:17.798785 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 05:01:17.801656 coreos-metadata[790]: Jan 30 05:01:17.798 INFO wrote hostname ci-4081.3.0-0-0f8f4a9941 to /sysroot/etc/hostname
Jan 30 05:01:17.802858 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:01:17.808953 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 30 05:01:17.809941 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 05:01:17.810993 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 30 05:01:17.966667 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 05:01:17.971993 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 05:01:17.976046 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 05:01:17.990857 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 05:01:17.993117 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:18.032209 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 05:01:18.038434 ignition[908]: INFO : Ignition 2.19.0
Jan 30 05:01:18.038434 ignition[908]: INFO : Stage: mount
Jan 30 05:01:18.040552 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:18.040552 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:18.040552 ignition[908]: INFO : mount: mount passed
Jan 30 05:01:18.044488 ignition[908]: INFO : Ignition finished successfully
Jan 30 05:01:18.042056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 05:01:18.052027 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 05:01:18.072289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:01:18.090824 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919)
Jan 30 05:01:18.095072 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:18.095167 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:18.097052 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:01:18.103789 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:01:18.107998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:01:18.152348 ignition[936]: INFO : Ignition 2.19.0
Jan 30 05:01:18.152348 ignition[936]: INFO : Stage: files
Jan 30 05:01:18.154169 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:18.154169 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:18.154169 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 05:01:18.156943 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 05:01:18.156943 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 05:01:18.161326 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 05:01:18.162554 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 05:01:18.162554 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 05:01:18.162013 unknown[936]: wrote ssh authorized keys file for user: core
Jan 30 05:01:18.165653 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:01:18.165653 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 05:01:18.208296 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 05:01:18.520757 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:01:18.522332 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:01:18.533904 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:01:18.533904 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:01:18.533904 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:01:18.533904 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:01:18.533904 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 05:01:18.665144 systemd-networkd[748]: eth0: Gained IPv6LL
Jan 30 05:01:18.729026 systemd-networkd[748]: eth1: Gained IPv6LL
Jan 30 05:01:19.120063 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 05:01:19.533869 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:01:19.533869 ignition[936]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 05:01:19.536684 ignition[936]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:01:19.536684 ignition[936]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:01:19.536684 ignition[936]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 05:01:19.536684 ignition[936]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 05:01:19.541478 ignition[936]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 05:01:19.541478 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:01:19.541478 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:01:19.541478 ignition[936]: INFO : files: files passed
Jan 30 05:01:19.541478 ignition[936]: INFO : Ignition finished successfully
Jan 30 05:01:19.539272 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 05:01:19.551083 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 05:01:19.557275 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 05:01:19.560883 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 05:01:19.562041 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 05:01:19.594181 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:01:19.594181 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:01:19.598459 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:01:19.601725 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:01:19.604409 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 05:01:19.619162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 05:01:19.673842 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 05:01:19.674098 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 05:01:19.677053 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 05:01:19.677908 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 05:01:19.679724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 05:01:19.691032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 05:01:19.711389 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:01:19.716062 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 05:01:19.751948 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:01:19.753933 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:01:19.754881 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 05:01:19.756433 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 05:01:19.756665 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:01:19.758344 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 05:01:19.760100 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 05:01:19.761028 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 05:01:19.762198 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:01:19.763436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 05:01:19.764884 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 05:01:19.766023 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:01:19.767354 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 05:01:19.768946 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 05:01:19.770361 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 05:01:19.771514 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 05:01:19.771762 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:01:19.773378 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:01:19.774352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:01:19.775533 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 05:01:19.775692 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:01:19.777039 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 05:01:19.777273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:01:19.779152 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 05:01:19.779494 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:01:19.781169 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 05:01:19.781467 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 05:01:19.782363 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 05:01:19.782599 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:01:19.791349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 05:01:19.801314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 05:01:19.802142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 05:01:19.803077 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:01:19.806476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 05:01:19.806706 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:01:19.815713 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 05:01:19.815945 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 05:01:19.839807 ignition[989]: INFO : Ignition 2.19.0
Jan 30 05:01:19.839807 ignition[989]: INFO : Stage: umount
Jan 30 05:01:19.839807 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:19.839807 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:19.848578 ignition[989]: INFO : umount: umount passed
Jan 30 05:01:19.848578 ignition[989]: INFO : Ignition finished successfully
Jan 30 05:01:19.846494 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 05:01:19.847552 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 05:01:19.847977 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 05:01:19.852175 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 05:01:19.852264 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 05:01:19.853078 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 05:01:19.853177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 05:01:19.854539 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 05:01:19.854631 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 05:01:19.862619 systemd[1]: Stopped target network.target - Network.
Jan 30 05:01:19.863264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 05:01:19.863406 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:01:19.867106 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 05:01:19.868542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 05:01:19.872878 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:01:19.873845 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 05:01:19.875073 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 05:01:19.876433 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 05:01:19.876545 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:01:19.878431 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 05:01:19.878510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:01:19.881100 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 05:01:19.881208 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 05:01:19.882770 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 05:01:19.882888 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 05:01:19.884592 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 05:01:19.888713 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 05:01:19.889912 systemd-networkd[748]: eth0: DHCPv6 lease lost
Jan 30 05:01:19.899945 systemd-networkd[748]: eth1: DHCPv6 lease lost
Jan 30 05:01:19.908372 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 05:01:19.908606 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 05:01:19.915117 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 05:01:19.915802 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 05:01:19.920073 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 05:01:19.922111 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 05:01:19.926479 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 05:01:19.926584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:01:19.928285 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 05:01:19.928402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 05:01:19.936121 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 05:01:19.936779 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 05:01:19.936891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:01:19.939353 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 05:01:19.939475 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:01:19.941435 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 05:01:19.941543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:01:19.943587 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 05:01:19.943672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:01:19.948012 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:01:19.964625 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 05:01:19.964917 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:01:19.969798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 05:01:19.969888 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:01:19.970613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 05:01:19.970670 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:01:19.971333 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 05:01:19.971430 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:01:19.973176 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 05:01:19.973298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:01:19.974953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:01:19.975042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:19.984098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 05:01:19.984932 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 05:01:19.985072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:01:19.986538 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 05:01:19.986629 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:01:19.992345 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 05:01:19.992448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:01:19.995313 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:01:19.995453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:19.998948 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 05:01:20.000819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 05:01:20.007380 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 05:01:20.007573 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 05:01:20.009730 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 05:01:20.015030 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 05:01:20.045367 systemd[1]: Switching root.
Jan 30 05:01:20.121963 systemd-journald[184]: Journal stopped
Jan 30 05:01:21.679196 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 30 05:01:21.679319 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 05:01:21.679355 kernel: SELinux: policy capability open_perms=1
Jan 30 05:01:21.679375 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 05:01:21.679400 kernel: SELinux: policy capability always_check_network=0
Jan 30 05:01:21.679419 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 05:01:21.679438 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 05:01:21.679456 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 05:01:21.679475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 05:01:21.679502 kernel: audit: type=1403 audit(1738213280.296:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 05:01:21.679530 systemd[1]: Successfully loaded SELinux policy in 50.088ms.
Jan 30 05:01:21.679562 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.180ms.
Jan 30 05:01:21.679590 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:01:21.679612 systemd[1]: Detected virtualization kvm.
Jan 30 05:01:21.679634 systemd[1]: Detected architecture x86-64.
Jan 30 05:01:21.679655 systemd[1]: Detected first boot.
Jan 30 05:01:21.679677 systemd[1]: Hostname set to .
Jan 30 05:01:21.679701 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:01:21.679767 zram_generator::config[1031]: No configuration found.
Jan 30 05:01:21.679792 systemd[1]: Populated /etc with preset unit settings.
Jan 30 05:01:21.679824 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 05:01:21.679848 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 05:01:21.679868 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 05:01:21.679892 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 05:01:21.679913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 05:01:21.679935 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 05:01:21.679956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 05:01:21.679975 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 05:01:21.679998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 05:01:21.680018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 05:01:21.680039 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 05:01:21.680058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:01:21.680077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:01:21.680096 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 05:01:21.680114 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 05:01:21.680136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 05:01:21.680186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:01:21.680215 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 05:01:21.680237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:01:21.680264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 05:01:21.680285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 05:01:21.680304 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:01:21.680323 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 05:01:21.680346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:01:21.680368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:01:21.680387 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:01:21.680413 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:01:21.680432 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 05:01:21.680450 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 05:01:21.680469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:01:21.680490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:01:21.680514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:01:21.680544 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 05:01:21.680570 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 05:01:21.680589 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 05:01:21.680608 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 05:01:21.680626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:01:21.680647 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 05:01:21.680669 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 05:01:21.680687 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 05:01:21.680709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 05:01:21.680733 systemd[1]: Reached target machines.target - Containers.
Jan 30 05:01:21.680768 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 05:01:21.680788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:01:21.680807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:01:21.680827 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 05:01:21.680848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:01:21.680871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 05:01:21.680893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:01:21.680913 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 05:01:21.680943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:01:21.680964 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 05:01:21.680983 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 05:01:21.681004 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 05:01:21.681026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 05:01:21.681046 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 05:01:21.681066 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:01:21.681094 kernel: fuse: init (API version 7.39)
Jan 30 05:01:21.681118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:01:21.681137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 05:01:21.681156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 05:01:21.681176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:01:21.681196 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 05:01:21.681214 systemd[1]: Stopped verity-setup.service.
Jan 30 05:01:21.681236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:01:21.681257 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 05:01:21.681278 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 05:01:21.681303 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 05:01:21.681325 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 05:01:21.681344 kernel: ACPI: bus type drm_connector registered
Jan 30 05:01:21.683104 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 05:01:21.683146 kernel: loop: module loaded
Jan 30 05:01:21.683180 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 05:01:21.683204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:01:21.683287 systemd-journald[1111]: Collecting audit messages is disabled.
Jan 30 05:01:21.683335 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 05:01:21.683364 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 05:01:21.683389 systemd-journald[1111]: Journal started
Jan 30 05:01:21.683441 systemd-journald[1111]: Runtime Journal (/run/log/journal/cb952e0e7b434d32842aa38c84d36856) is 4.9M, max 39.3M, 34.4M free.
Jan 30 05:01:21.250207 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 05:01:21.275467 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 05:01:21.276171 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 05:01:21.688980 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:01:21.692482 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 05:01:21.694021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:01:21.694458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:01:21.696343 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 05:01:21.696598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 05:01:21.698080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:01:21.698457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:01:21.699867 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 05:01:21.700093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 05:01:21.701852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:01:21.702232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:01:21.703548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:01:21.704937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 05:01:21.706265 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 05:01:21.721268 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 05:01:21.732871 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 05:01:21.739953 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 05:01:21.744115 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 05:01:21.744168 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:01:21.747012 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 05:01:21.753117 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 05:01:21.761022 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 05:01:21.761991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:01:21.775196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 05:01:21.779768 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 05:01:21.780733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:01:21.793150 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 05:01:21.794023 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:01:21.803130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:01:21.813097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 05:01:21.817592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:01:21.821730 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 05:01:21.823564 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 05:01:21.824827 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 05:01:21.870906 systemd-journald[1111]: Time spent on flushing to /var/log/journal/cb952e0e7b434d32842aa38c84d36856 is 55.536ms for 989 entries.
Jan 30 05:01:21.870906 systemd-journald[1111]: System Journal (/var/log/journal/cb952e0e7b434d32842aa38c84d36856) is 8.0M, max 195.6M, 187.6M free.
Jan 30 05:01:21.971233 systemd-journald[1111]: Received client request to flush runtime journal.
Jan 30 05:01:21.971307 kernel: loop0: detected capacity change from 0 to 210664
Jan 30 05:01:21.971363 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 05:01:21.907298 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 05:01:21.910643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 05:01:21.925140 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 05:01:21.926939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:01:21.938075 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 05:01:21.973076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:01:21.980503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 05:01:22.023130 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 05:01:22.035004 kernel: loop1: detected capacity change from 0 to 8
Jan 30 05:01:22.033377 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Jan 30 05:01:22.033399 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Jan 30 05:01:22.043615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:01:22.061315 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 05:01:22.065795 kernel: loop2: detected capacity change from 0 to 140768
Jan 30 05:01:22.122249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 05:01:22.124319 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 05:01:22.154784 kernel: loop3: detected capacity change from 0 to 142488
Jan 30 05:01:22.165146 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 05:01:22.178398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:01:22.256291 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 30 05:01:22.256325 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 30 05:01:22.259840 kernel: loop4: detected capacity change from 0 to 210664
Jan 30 05:01:22.279173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:01:22.296783 kernel: loop5: detected capacity change from 0 to 8
Jan 30 05:01:22.305779 kernel: loop6: detected capacity change from 0 to 140768
Jan 30 05:01:22.337795 kernel: loop7: detected capacity change from 0 to 142488
Jan 30 05:01:22.375295 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 05:01:22.376156 (sd-merge)[1178]: Merged extensions into '/usr'.
Jan 30 05:01:22.384043 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 05:01:22.384258 systemd[1]: Reloading...
Jan 30 05:01:22.590052 zram_generator::config[1203]: No configuration found.
Jan 30 05:01:22.859801 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 05:01:22.940913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:01:23.046682 systemd[1]: Reloading finished in 661 ms.
Jan 30 05:01:23.075803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 05:01:23.082241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 05:01:23.092118 systemd[1]: Starting ensure-sysext.service...
Jan 30 05:01:23.106931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:01:23.133968 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jan 30 05:01:23.133994 systemd[1]: Reloading...
Jan 30 05:01:23.150154 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 05:01:23.151173 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 05:01:23.152905 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 05:01:23.153419 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 05:01:23.153527 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 05:01:23.159882 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 05:01:23.159899 systemd-tmpfiles[1249]: Skipping /boot
Jan 30 05:01:23.181338 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 05:01:23.181360 systemd-tmpfiles[1249]: Skipping /boot Jan 30 05:01:23.303783 zram_generator::config[1273]: No configuration found. Jan 30 05:01:23.521102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:01:23.602407 systemd[1]: Reloading finished in 467 ms. Jan 30 05:01:23.617243 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 05:01:23.618495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:01:23.634000 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:01:23.639989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:01:23.643071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:01:23.654038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:01:23.658221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:01:23.667096 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:01:23.679704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.680099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:23.686227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:23.698143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:23.702991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 05:01:23.704240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:23.704381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.709123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.709321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:23.709497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:23.718122 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 05:01:23.718779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.725951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.726459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:23.736257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:01:23.737540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:23.737729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:23.744423 systemd[1]: Finished ensure-sysext.service. Jan 30 05:01:23.761248 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 30 05:01:23.763519 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:01:23.777385 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:23.778239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:23.780434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:23.780652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:01:23.782184 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:23.782671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:23.786909 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:01:23.787129 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:01:23.794286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:01:23.795088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:23.803211 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:01:23.808422 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:01:23.812009 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 30 05:01:23.835659 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:01:23.855982 augenrules[1357]: No rules Jan 30 05:01:23.865383 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:01:23.870043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:01:23.878352 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 05:01:23.879441 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:01:23.883843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:01:23.911891 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 05:01:24.033201 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:01:24.034974 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:01:24.040111 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 05:01:24.053936 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 05:01:24.054493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:24.054676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:24.063990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:24.070124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:24.083118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:01:24.083963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:24.084006 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 30 05:01:24.084024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:24.095128 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 05:01:24.102493 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 05:01:24.104392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:24.104873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:24.127458 systemd-networkd[1366]: lo: Link UP Jan 30 05:01:24.127469 systemd-networkd[1366]: lo: Gained carrier Jan 30 05:01:24.128915 systemd-networkd[1366]: Enumeration completed Jan 30 05:01:24.129048 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:01:24.138991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 05:01:24.141564 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:24.141783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:24.144131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:24.146679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:24.146904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:01:24.148423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:01:24.186854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364) Jan 30 05:01:24.200451 systemd-resolved[1325]: Positive Trust Anchors: Jan 30 05:01:24.201004 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:01:24.201072 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:01:24.211405 systemd-resolved[1325]: Using system hostname 'ci-4081.3.0-0-0f8f4a9941'. Jan 30 05:01:24.214848 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:01:24.215523 systemd[1]: Reached target network.target - Network. Jan 30 05:01:24.216843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:01:24.229782 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-c2:43:e3:c1:1f:bb.network. Jan 30 05:01:24.232881 systemd-networkd[1366]: eth0: Link UP Jan 30 05:01:24.232891 systemd-networkd[1366]: eth0: Gained carrier Jan 30 05:01:24.236355 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:24.295906 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 05:01:24.326823 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 05:01:24.332763 kernel: ACPI: button: Power Button [PWRF] Jan 30 05:01:24.336227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 05:01:24.337761 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 05:01:24.337848 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 05:01:24.342818 kernel: Console: switching to colour dummy device 80x25 Jan 30 05:01:24.342900 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 05:01:24.342930 kernel: [drm] features: -context_init Jan 30 05:01:24.343069 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-86:8c:98:17:a6:63.network. Jan 30 05:01:24.345806 kernel: [drm] number of scanouts: 1 Jan 30 05:01:24.345914 kernel: [drm] number of cap sets: 0 Jan 30 05:01:24.346600 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:24.347408 systemd-networkd[1366]: eth1: Link UP Jan 30 05:01:24.347416 systemd-networkd[1366]: eth1: Gained carrier Jan 30 05:01:24.349446 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 05:01:24.353466 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:24.354225 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 05:01:24.354901 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:24.363811 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 05:01:24.389184 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 30 05:01:24.389801 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 05:01:24.393221 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 05:01:24.404764 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 05:01:24.436784 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 05:01:24.489384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:24.491505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:01:24.491845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:24.511342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:24.574091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:01:24.576194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:24.587062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:24.637229 kernel: EDAC MC: Ver: 3.0.0 Jan 30 05:01:24.664425 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 05:01:24.681398 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 05:01:24.703805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:24.705662 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:01:24.741415 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 05:01:24.743240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:01:24.743397 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:01:24.743728 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 30 05:01:24.743948 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:01:24.744333 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:01:24.744589 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:01:24.744702 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 05:01:24.747184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:01:24.747247 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:01:24.747371 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:01:24.750560 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:01:24.755085 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:01:24.765672 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 05:01:24.769824 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 05:01:24.773389 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:01:24.776366 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:01:24.776924 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:01:24.777457 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:01:24.777501 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:01:24.783910 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:01:24.791863 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 30 05:01:24.803082 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 05:01:24.808977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:01:24.816907 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:01:24.831165 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:01:24.831871 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:01:24.836324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:01:24.847902 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:01:24.863608 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:01:24.874041 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 05:01:24.877007 coreos-metadata[1436]: Jan 30 05:01:24.876 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:24.883127 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 05:01:24.886476 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 05:01:24.890878 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 05:01:24.896761 coreos-metadata[1436]: Jan 30 05:01:24.894 INFO Fetch successful Jan 30 05:01:24.899033 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 05:01:24.907016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 05:01:24.911864 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 30 05:01:24.947787 jq[1438]: false Jan 30 05:01:24.957320 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 05:01:24.957669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 05:01:24.958155 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 05:01:24.958385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 05:01:24.972140 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 05:01:24.972851 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 05:01:25.016774 update_engine[1448]: I20250130 05:01:25.009495 1448 main.cc:92] Flatcar Update Engine starting Jan 30 05:01:25.018395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 05:01:25.023773 extend-filesystems[1441]: Found loop4 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found loop5 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found loop6 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found loop7 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda1 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda2 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda3 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found usr Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda4 Jan 30 05:01:25.023773 extend-filesystems[1441]: Found vda6 Jan 30 05:01:25.084507 extend-filesystems[1441]: Found vda7 Jan 30 05:01:25.084507 extend-filesystems[1441]: Found vda9 Jan 30 05:01:25.084507 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 30 05:01:25.089890 update_engine[1448]: I20250130 05:01:25.060937 1448 update_check_scheduler.cc:74] Next update check in 3m12s Jan 30 05:01:25.052210 dbus-daemon[1437]: [system] SELinux support is enabled Jan 30 05:01:25.090417 tar[1457]: linux-amd64/helm Jan 30 05:01:25.028771 
(ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:01:25.091153 jq[1449]: true Jan 30 05:01:25.048935 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:01:25.054913 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 05:01:25.065293 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:01:25.065457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:01:25.065499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:01:25.075733 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:01:25.076691 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 05:01:25.076721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:01:25.086157 systemd[1]: Started update-engine.service - Update Engine. Jan 30 05:01:25.099070 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 05:01:25.115274 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 30 05:01:25.133286 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:01:25.136319 jq[1474]: true Jan 30 05:01:25.148819 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 05:01:25.170392 systemd-logind[1447]: New seat seat0. 
Jan 30 05:01:25.186866 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 05:01:25.186902 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:01:25.187276 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 05:01:25.241601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Jan 30 05:01:25.378814 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 05:01:25.367061 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:01:25.415915 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 05:01:25.415915 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 05:01:25.415915 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 05:01:25.414354 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:01:25.440228 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 30 05:01:25.440228 extend-filesystems[1441]: Found vdb Jan 30 05:01:25.414715 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 05:01:25.447262 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:01:25.449707 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:01:25.465501 systemd[1]: Starting sshkeys.service... Jan 30 05:01:25.529421 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:01:25.544517 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 05:01:25.677051 coreos-metadata[1514]: Jan 30 05:01:25.676 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:25.694086 coreos-metadata[1514]: Jan 30 05:01:25.693 INFO Fetch successful Jan 30 05:01:25.707054 unknown[1514]: wrote ssh authorized keys file for user: core Jan 30 05:01:25.748397 containerd[1465]: time="2025-01-30T05:01:25.746443524Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 05:01:25.753305 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:01:25.756227 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:01:25.764402 systemd[1]: Finished sshkeys.service. Jan 30 05:01:25.823396 containerd[1465]: time="2025-01-30T05:01:25.822939283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.826814232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.826891413Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.826925223Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.827196430Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.827228825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.827322540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:25.827443 containerd[1465]: time="2025-01-30T05:01:25.827340714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.828305 containerd[1465]: time="2025-01-30T05:01:25.828256035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829116 containerd[1465]: time="2025-01-30T05:01:25.828420072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829116 containerd[1465]: time="2025-01-30T05:01:25.828457594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829116 containerd[1465]: time="2025-01-30T05:01:25.828477729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829116 containerd[1465]: time="2025-01-30T05:01:25.828647201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829116 containerd[1465]: time="2025-01-30T05:01:25.829056929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829682 containerd[1465]: time="2025-01-30T05:01:25.829643874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:25.829890 containerd[1465]: time="2025-01-30T05:01:25.829866336Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:01:25.830157 containerd[1465]: time="2025-01-30T05:01:25.830129859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 05:01:25.830800 containerd[1465]: time="2025-01-30T05:01:25.830555716Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:01:25.846310 containerd[1465]: time="2025-01-30T05:01:25.846245479Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:01:25.847275 containerd[1465]: time="2025-01-30T05:01:25.846729074Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 05:01:25.847275 containerd[1465]: time="2025-01-30T05:01:25.846844570Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:01:25.847275 containerd[1465]: time="2025-01-30T05:01:25.846869753Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:01:25.847275 containerd[1465]: time="2025-01-30T05:01:25.846899414Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 05:01:25.847275 containerd[1465]: time="2025-01-30T05:01:25.847125925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848197709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848380515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848403073Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848438762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848462219Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848482982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848534947Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848555338Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848576728Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848595507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848614593Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848634288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848669130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.849763 containerd[1465]: time="2025-01-30T05:01:25.848692933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848715597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848763871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848782828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848801058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848818625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848863174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848885724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848906947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848926469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848950972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.848973931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.849003450Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.849035058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.849053568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850151 containerd[1465]: time="2025-01-30T05:01:25.849068581Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849137937Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849165412Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849251238Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849300832Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849316620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849334382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849348018Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:01:25.850596 containerd[1465]: time="2025-01-30T05:01:25.849363110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 05:01:25.850876 containerd[1465]: time="2025-01-30T05:01:25.850052754Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:01:25.850876 containerd[1465]: time="2025-01-30T05:01:25.850164696Z" level=info msg="Connect containerd service" Jan 30 05:01:25.850876 containerd[1465]: time="2025-01-30T05:01:25.850221001Z" level=info msg="using legacy CRI server" Jan 30 05:01:25.850876 containerd[1465]: time="2025-01-30T05:01:25.850232749Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:01:25.850876 containerd[1465]: 
time="2025-01-30T05:01:25.850397938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851332364Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851489138Z" level=info msg="Start subscribing containerd event" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851582610Z" level=info msg="Start recovering state" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851665300Z" level=info msg="Start event monitor" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851684386Z" level=info msg="Start snapshots syncer" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851697771Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.851708654Z" level=info msg="Start streaming server" Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.852482042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:01:25.854582 containerd[1465]: time="2025-01-30T05:01:25.852619640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:01:25.854436 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:01:25.857722 containerd[1465]: time="2025-01-30T05:01:25.856605957Z" level=info msg="containerd successfully booted in 0.115958s" Jan 30 05:01:25.904932 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:01:25.941115 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 30 05:01:25.952194 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:01:25.963495 systemd[1]: Started sshd@0-137.184.189.202:22-147.75.109.163:32962.service - OpenSSH per-connection server daemon (147.75.109.163:32962). Jan 30 05:01:25.988712 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:01:25.989041 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:01:26.006321 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:01:26.041486 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:01:26.055212 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 05:01:26.068626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:01:26.073997 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:01:26.124296 sshd[1533]: Accepted publickey for core from 147.75.109.163 port 32962 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:26.132672 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:26.154060 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 30 05:01:26.156932 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:26.157915 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 05:01:26.168389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 05:01:26.174265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:01:26.187518 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:01:26.187719 systemd-logind[1447]: New session 1 of user core. Jan 30 05:01:26.204934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:26.217023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 30 05:01:26.231860 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 05:01:26.256379 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 05:01:26.273908 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 05:01:26.289636 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:01:26.345097 systemd-networkd[1366]: eth1: Gained IPv6LL Jan 30 05:01:26.345577 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:26.350110 tar[1457]: linux-amd64/LICENSE Jan 30 05:01:26.350110 tar[1457]: linux-amd64/README.md Jan 30 05:01:26.402192 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 05:01:26.485273 systemd[1552]: Queued start job for default target default.target. Jan 30 05:01:26.500845 systemd[1552]: Created slice app.slice - User Application Slice. Jan 30 05:01:26.500913 systemd[1552]: Reached target paths.target - Paths. Jan 30 05:01:26.500939 systemd[1552]: Reached target timers.target - Timers. Jan 30 05:01:26.511263 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 05:01:26.529367 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 05:01:26.530722 systemd[1552]: Reached target sockets.target - Sockets. Jan 30 05:01:26.530790 systemd[1552]: Reached target basic.target - Basic System. Jan 30 05:01:26.530885 systemd[1552]: Reached target default.target - Main User Target. Jan 30 05:01:26.530939 systemd[1552]: Startup finished in 237ms. Jan 30 05:01:26.532966 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 05:01:26.545077 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 05:01:26.635994 systemd[1]: Started sshd@1-137.184.189.202:22-147.75.109.163:32974.service - OpenSSH per-connection server daemon (147.75.109.163:32974). 
Jan 30 05:01:26.717044 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 32974 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:26.718696 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:26.730075 systemd-logind[1447]: New session 2 of user core. Jan 30 05:01:26.744214 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 05:01:26.820596 sshd[1570]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:26.837042 systemd[1]: sshd@1-137.184.189.202:22-147.75.109.163:32974.service: Deactivated successfully. Jan 30 05:01:26.840496 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 05:01:26.847461 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 30 05:01:26.856258 systemd[1]: Started sshd@2-137.184.189.202:22-147.75.109.163:42150.service - OpenSSH per-connection server daemon (147.75.109.163:42150). Jan 30 05:01:26.863031 systemd-logind[1447]: Removed session 2. Jan 30 05:01:26.920866 sshd[1577]: Accepted publickey for core from 147.75.109.163 port 42150 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:26.922541 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:26.932168 systemd-logind[1447]: New session 3 of user core. Jan 30 05:01:26.940132 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 05:01:27.016133 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:27.023909 systemd[1]: sshd@2-137.184.189.202:22-147.75.109.163:42150.service: Deactivated successfully. Jan 30 05:01:27.028581 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 05:01:27.032709 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 30 05:01:27.034334 systemd-logind[1447]: Removed session 3. 
Jan 30 05:01:27.596155 systemd[1]: Started sshd@3-137.184.189.202:22-116.105.221.82:51708.service - OpenSSH per-connection server daemon (116.105.221.82:51708). Jan 30 05:01:27.613977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:27.622925 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:01:27.624940 systemd[1]: Startup finished in 1.358s (kernel) + 6.564s (initrd) + 7.376s (userspace) = 15.299s. Jan 30 05:01:27.643372 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:01:28.548309 kubelet[1589]: E0130 05:01:28.548170 1589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:01:28.551715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:01:28.551973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:01:28.552662 systemd[1]: kubelet.service: Consumed 1.403s CPU time. 
Jan 30 05:01:28.990643 sshd[1588]: Invalid user user from 116.105.221.82 port 51708 Jan 30 05:01:32.965960 sshd[1603]: pam_faillock(sshd:auth): User unknown Jan 30 05:01:32.970426 sshd[1588]: Postponed keyboard-interactive for invalid user user from 116.105.221.82 port 51708 ssh2 [preauth] Jan 30 05:01:33.227061 sshd[1603]: pam_unix(sshd:auth): check pass; user unknown Jan 30 05:01:33.227114 sshd[1603]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.105.221.82 Jan 30 05:01:33.228433 sshd[1603]: pam_faillock(sshd:auth): User unknown Jan 30 05:01:35.664167 sshd[1588]: PAM: Permission denied for illegal user user from 116.105.221.82 Jan 30 05:01:35.664873 sshd[1588]: Failed keyboard-interactive/pam for invalid user user from 116.105.221.82 port 51708 ssh2 Jan 30 05:01:35.856045 sshd[1588]: Connection closed by invalid user user 116.105.221.82 port 51708 [preauth] Jan 30 05:01:35.858909 systemd[1]: sshd@3-137.184.189.202:22-116.105.221.82:51708.service: Deactivated successfully. Jan 30 05:01:37.037225 systemd[1]: Started sshd@4-137.184.189.202:22-147.75.109.163:34328.service - OpenSSH per-connection server daemon (147.75.109.163:34328). Jan 30 05:01:37.099762 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 34328 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:37.101858 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:37.108054 systemd-logind[1447]: New session 4 of user core. Jan 30 05:01:37.122993 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 05:01:37.187704 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:37.201023 systemd[1]: sshd@4-137.184.189.202:22-147.75.109.163:34328.service: Deactivated successfully. Jan 30 05:01:37.203346 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 05:01:37.204808 systemd-logind[1447]: Session 4 logged out. 
Waiting for processes to exit. Jan 30 05:01:37.216277 systemd[1]: Started sshd@5-137.184.189.202:22-147.75.109.163:34332.service - OpenSSH per-connection server daemon (147.75.109.163:34332). Jan 30 05:01:37.219483 systemd-logind[1447]: Removed session 4. Jan 30 05:01:37.261862 sshd[1614]: Accepted publickey for core from 147.75.109.163 port 34332 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:37.264028 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:37.271105 systemd-logind[1447]: New session 5 of user core. Jan 30 05:01:37.279096 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 05:01:37.338989 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:37.352615 systemd[1]: sshd@5-137.184.189.202:22-147.75.109.163:34332.service: Deactivated successfully. Jan 30 05:01:37.355063 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 05:01:37.356842 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 30 05:01:37.364279 systemd[1]: Started sshd@6-137.184.189.202:22-147.75.109.163:34344.service - OpenSSH per-connection server daemon (147.75.109.163:34344). Jan 30 05:01:37.367498 systemd-logind[1447]: Removed session 5. Jan 30 05:01:37.411367 sshd[1621]: Accepted publickey for core from 147.75.109.163 port 34344 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:37.413411 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:37.421075 systemd-logind[1447]: New session 6 of user core. Jan 30 05:01:37.432117 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 05:01:37.495471 sshd[1621]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:37.507042 systemd[1]: sshd@6-137.184.189.202:22-147.75.109.163:34344.service: Deactivated successfully. Jan 30 05:01:37.509096 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 30 05:01:37.511029 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 30 05:01:37.516106 systemd[1]: Started sshd@7-137.184.189.202:22-147.75.109.163:34350.service - OpenSSH per-connection server daemon (147.75.109.163:34350). Jan 30 05:01:37.517865 systemd-logind[1447]: Removed session 6. Jan 30 05:01:37.561038 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 34350 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:01:37.563278 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:01:37.571356 systemd-logind[1447]: New session 7 of user core. Jan 30 05:01:37.577111 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 05:01:37.654372 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 05:01:37.655706 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:01:38.172165 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 05:01:38.185488 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 05:01:38.585602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:01:38.596915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:38.721935 dockerd[1646]: time="2025-01-30T05:01:38.721874631Z" level=info msg="Starting up" Jan 30 05:01:38.793652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 05:01:38.805319 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:01:38.886289 kubelet[1663]: E0130 05:01:38.885621 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:01:38.892353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:01:38.892561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:01:38.952207 dockerd[1646]: time="2025-01-30T05:01:38.951888216Z" level=info msg="Loading containers: start." Jan 30 05:01:39.124904 kernel: Initializing XFRM netlink socket Jan 30 05:01:39.158481 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 30 05:01:39.878039 systemd-resolved[1325]: Clock change detected. Flushing caches. Jan 30 05:01:39.878737 systemd-timesyncd[1341]: Contacted time server 135.134.111.122:123 (2.flatcar.pool.ntp.org). Jan 30 05:01:39.878818 systemd-timesyncd[1341]: Initial clock synchronization to Thu 2025-01-30 05:01:39.877832 UTC. Jan 30 05:01:39.888452 systemd-networkd[1366]: docker0: Link UP Jan 30 05:01:39.923793 dockerd[1646]: time="2025-01-30T05:01:39.923665330Z" level=info msg="Loading containers: done." Jan 30 05:01:39.956503 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1653363223-merged.mount: Deactivated successfully. 
Jan 30 05:01:39.959818 dockerd[1646]: time="2025-01-30T05:01:39.959744600Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 05:01:39.960010 dockerd[1646]: time="2025-01-30T05:01:39.959932159Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 05:01:39.960133 dockerd[1646]: time="2025-01-30T05:01:39.960109527Z" level=info msg="Daemon has completed initialization" Jan 30 05:01:40.033837 dockerd[1646]: time="2025-01-30T05:01:40.033671935Z" level=info msg="API listen on /run/docker.sock" Jan 30 05:01:40.034677 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 05:01:41.250266 containerd[1465]: time="2025-01-30T05:01:41.250220610Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 05:01:41.985395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362614513.mount: Deactivated successfully. 
Jan 30 05:01:43.899317 containerd[1465]: time="2025-01-30T05:01:43.899182106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:43.906616 containerd[1465]: time="2025-01-30T05:01:43.906503278Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 05:01:43.912348 containerd[1465]: time="2025-01-30T05:01:43.912224369Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:43.919774 containerd[1465]: time="2025-01-30T05:01:43.919678663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:43.922656 containerd[1465]: time="2025-01-30T05:01:43.922420971Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.671533956s" Jan 30 05:01:43.922656 containerd[1465]: time="2025-01-30T05:01:43.922490491Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 05:01:43.959791 containerd[1465]: time="2025-01-30T05:01:43.959751250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 05:01:45.887836 containerd[1465]: time="2025-01-30T05:01:45.887720592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:45.892139 containerd[1465]: time="2025-01-30T05:01:45.892055682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 05:01:45.895639 containerd[1465]: time="2025-01-30T05:01:45.895536264Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:45.903243 containerd[1465]: time="2025-01-30T05:01:45.903137912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:45.905622 containerd[1465]: time="2025-01-30T05:01:45.905542224Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.945584958s" Jan 30 05:01:45.905622 containerd[1465]: time="2025-01-30T05:01:45.905612081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 05:01:45.949010 containerd[1465]: time="2025-01-30T05:01:45.948958704Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 05:01:45.956926 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 30 05:01:47.373839 containerd[1465]: time="2025-01-30T05:01:47.373729615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:47.376820 containerd[1465]: time="2025-01-30T05:01:47.376726258Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 05:01:47.380072 containerd[1465]: time="2025-01-30T05:01:47.379982365Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:47.388172 containerd[1465]: time="2025-01-30T05:01:47.388081041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:47.391453 containerd[1465]: time="2025-01-30T05:01:47.390636690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.441630441s" Jan 30 05:01:47.391453 containerd[1465]: time="2025-01-30T05:01:47.390698988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 05:01:47.441646 containerd[1465]: time="2025-01-30T05:01:47.441589756Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 05:01:49.009527 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 05:01:49.071649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705750805.mount: Deactivated successfully.
Jan 30 05:01:49.564848 containerd[1465]: time="2025-01-30T05:01:49.564769308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:49.567383 containerd[1465]: time="2025-01-30T05:01:49.567278416Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337"
Jan 30 05:01:49.569646 containerd[1465]: time="2025-01-30T05:01:49.569560306Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:49.574651 containerd[1465]: time="2025-01-30T05:01:49.574557536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:49.576447 containerd[1465]: time="2025-01-30T05:01:49.575829271Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.134174803s"
Jan 30 05:01:49.576447 containerd[1465]: time="2025-01-30T05:01:49.575887313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 05:01:49.616835 containerd[1465]: time="2025-01-30T05:01:49.616779246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 05:01:49.734141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 05:01:49.742783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:01:49.949641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:01:49.953184 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:01:50.034333 kubelet[1909]: E0130 05:01:50.034090 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:01:50.037879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:01:50.038109 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:01:50.258427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540149697.mount: Deactivated successfully.
Jan 30 05:01:51.662986 containerd[1465]: time="2025-01-30T05:01:51.662887217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:51.666359 containerd[1465]: time="2025-01-30T05:01:51.666270179Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 30 05:01:51.673458 containerd[1465]: time="2025-01-30T05:01:51.673334423Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:51.686958 containerd[1465]: time="2025-01-30T05:01:51.686795387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:51.689538 containerd[1465]: time="2025-01-30T05:01:51.689476897Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.072635248s"
Jan 30 05:01:51.690122 containerd[1465]: time="2025-01-30T05:01:51.689728766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 05:01:51.730881 containerd[1465]: time="2025-01-30T05:01:51.730702010Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 05:01:52.147392 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Jan 30 05:01:52.529012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989826647.mount: Deactivated successfully.
Jan 30 05:01:52.578417 containerd[1465]: time="2025-01-30T05:01:52.578318121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:52.591736 containerd[1465]: time="2025-01-30T05:01:52.591624329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 30 05:01:52.603023 containerd[1465]: time="2025-01-30T05:01:52.602902313Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:52.612557 containerd[1465]: time="2025-01-30T05:01:52.612461650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:52.613981 containerd[1465]: time="2025-01-30T05:01:52.613908019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 883.142675ms"
Jan 30 05:01:52.613981 containerd[1465]: time="2025-01-30T05:01:52.613977336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 30 05:01:52.653958 containerd[1465]: time="2025-01-30T05:01:52.653905760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 05:01:53.667559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627702400.mount: Deactivated successfully.
Jan 30 05:01:56.325842 containerd[1465]: time="2025-01-30T05:01:56.325744991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:56.329015 containerd[1465]: time="2025-01-30T05:01:56.328901994Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 30 05:01:56.333423 containerd[1465]: time="2025-01-30T05:01:56.333328599Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:56.342026 containerd[1465]: time="2025-01-30T05:01:56.341907639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:01:56.344733 containerd[1465]: time="2025-01-30T05:01:56.344453273Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.690486757s"
Jan 30 05:01:56.344733 containerd[1465]: time="2025-01-30T05:01:56.344526086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 30 05:01:59.904634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:01:59.915537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:01:59.957490 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)...
Jan 30 05:01:59.957520 systemd[1]: Reloading...
Jan 30 05:02:00.149507 zram_generator::config[2122]: No configuration found.
Jan 30 05:02:00.399044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:02:00.579929 systemd[1]: Reloading finished in 621 ms.
Jan 30 05:02:00.661718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:02:00.668510 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 05:02:00.668848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:02:00.675940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:02:00.849488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:02:00.866209 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 05:02:00.940354 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:02:00.940354 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 05:02:00.940354 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:02:00.942927 kubelet[2180]: I0130 05:02:00.942752 2180 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 05:02:01.542407 kubelet[2180]: I0130 05:02:01.542327 2180 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 05:02:01.542407 kubelet[2180]: I0130 05:02:01.542369 2180 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 05:02:01.542937 kubelet[2180]: I0130 05:02:01.542879 2180 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 05:02:01.577169 kubelet[2180]: I0130 05:02:01.576171 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 05:02:01.578626 kubelet[2180]: E0130 05:02:01.578506 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.189.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.598216 kubelet[2180]: I0130 05:02:01.598152 2180 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 05:02:01.598684 kubelet[2180]: I0130 05:02:01.598581 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 05:02:01.599997 kubelet[2180]: I0130 05:02:01.598643 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-0-0f8f4a9941","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 05:02:01.602983 kubelet[2180]: I0130 05:02:01.602921 2180 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 05:02:01.603219 kubelet[2180]: I0130 05:02:01.603204 2180 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 05:02:01.605111 kubelet[2180]: I0130 05:02:01.605064 2180 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:02:01.607997 kubelet[2180]: I0130 05:02:01.607946 2180 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 05:02:01.608663 kubelet[2180]: I0130 05:02:01.608186 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 05:02:01.608663 kubelet[2180]: I0130 05:02:01.608236 2180 kubelet.go:312] "Adding apiserver pod source"
Jan 30 05:02:01.608663 kubelet[2180]: I0130 05:02:01.608274 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 05:02:01.613632 kubelet[2180]: W0130 05:02:01.613549 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.614555 kubelet[2180]: E0130 05:02:01.614050 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.189.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.614555 kubelet[2180]: I0130 05:02:01.614195 2180 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 05:02:01.617331 kubelet[2180]: I0130 05:02:01.616690 2180 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 05:02:01.617331 kubelet[2180]: W0130 05:02:01.616823 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 05:02:01.618490 kubelet[2180]: I0130 05:02:01.618458 2180 server.go:1264] "Started kubelet"
Jan 30 05:02:01.630174 kubelet[2180]: W0130 05:02:01.630055 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-0-0f8f4a9941&limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.630506 kubelet[2180]: E0130 05:02:01.630486 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.189.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-0-0f8f4a9941&limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.630699 kubelet[2180]: I0130 05:02:01.630632 2180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 05:02:01.634437 kubelet[2180]: I0130 05:02:01.634343 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 05:02:01.635182 kubelet[2180]: I0130 05:02:01.635141 2180 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 05:02:01.636320 kubelet[2180]: I0130 05:02:01.636271 2180 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 05:02:01.640277 kubelet[2180]: E0130 05:02:01.637793 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.189.202:6443/api/v1/namespaces/default/events\": dial tcp 137.184.189.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-0-0f8f4a9941.181f5fce7aeacf14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-0-0f8f4a9941,UID:ci-4081.3.0-0-0f8f4a9941,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-0-0f8f4a9941,},FirstTimestamp:2025-01-30 05:02:01.618411284 +0000 UTC m=+0.746050611,LastTimestamp:2025-01-30 05:02:01.618411284 +0000 UTC m=+0.746050611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-0-0f8f4a9941,}"
Jan 30 05:02:01.640560 kubelet[2180]: I0130 05:02:01.640332 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 05:02:01.646372 kubelet[2180]: I0130 05:02:01.644856 2180 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 05:02:01.646372 kubelet[2180]: I0130 05:02:01.645795 2180 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 05:02:01.646372 kubelet[2180]: I0130 05:02:01.645946 2180 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 05:02:01.649017 kubelet[2180]: W0130 05:02:01.646948 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.652077 kubelet[2180]: E0130 05:02:01.651642 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.189.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.652077 kubelet[2180]: I0130 05:02:01.649462 2180 factory.go:221] Registration of the systemd container factory successfully
Jan 30 05:02:01.652077 kubelet[2180]: I0130 05:02:01.651829 2180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 05:02:01.652561 kubelet[2180]: E0130 05:02:01.647995 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-0-0f8f4a9941?timeout=10s\": dial tcp 137.184.189.202:6443: connect: connection refused" interval="200ms"
Jan 30 05:02:01.655575 kubelet[2180]: I0130 05:02:01.654907 2180 factory.go:221] Registration of the containerd container factory successfully
Jan 30 05:02:01.660130 kubelet[2180]: E0130 05:02:01.659621 2180 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 05:02:01.679511 kubelet[2180]: I0130 05:02:01.679475 2180 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 05:02:01.679511 kubelet[2180]: I0130 05:02:01.679513 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 05:02:01.679779 kubelet[2180]: I0130 05:02:01.679546 2180 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:02:01.690876 kubelet[2180]: I0130 05:02:01.690478 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 05:02:01.694745 kubelet[2180]: I0130 05:02:01.694705 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 05:02:01.694979 kubelet[2180]: I0130 05:02:01.694965 2180 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 05:02:01.695261 kubelet[2180]: I0130 05:02:01.695247 2180 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 05:02:01.695798 kubelet[2180]: E0130 05:02:01.695442 2180 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 05:02:01.699215 kubelet[2180]: I0130 05:02:01.699176 2180 policy_none.go:49] "None policy: Start"
Jan 30 05:02:01.700850 kubelet[2180]: W0130 05:02:01.700785 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.701188 kubelet[2180]: E0130 05:02:01.701165 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.189.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:01.702552 kubelet[2180]: I0130 05:02:01.702000 2180 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 05:02:01.702552 kubelet[2180]: I0130 05:02:01.702043 2180 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 05:02:01.723547 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 05:02:01.741344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 05:02:01.748862 kubelet[2180]: I0130 05:02:01.748804 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.751146 kubelet[2180]: E0130 05:02:01.750064 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.202:6443/api/v1/nodes\": dial tcp 137.184.189.202:6443: connect: connection refused" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.760251 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 05:02:01.763442 kubelet[2180]: I0130 05:02:01.762557 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 05:02:01.763442 kubelet[2180]: I0130 05:02:01.763024 2180 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 05:02:01.763442 kubelet[2180]: I0130 05:02:01.763192 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 05:02:01.766882 kubelet[2180]: E0130 05:02:01.766840 2180 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-0-0f8f4a9941\" not found"
Jan 30 05:02:01.797979 kubelet[2180]: I0130 05:02:01.796252 2180 topology_manager.go:215] "Topology Admit Handler" podUID="bd84838bdbadc395ab90facdc1fec145" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.800252 kubelet[2180]: I0130 05:02:01.800203 2180 topology_manager.go:215] "Topology Admit Handler" podUID="c3e050e4290a1912f9fb334a6e9a6bc0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.803522 kubelet[2180]: I0130 05:02:01.802575 2180 topology_manager.go:215] "Topology Admit Handler" podUID="001ad993ae3cf4744955744d289caf24" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.816361 systemd[1]: Created slice kubepods-burstable-podbd84838bdbadc395ab90facdc1fec145.slice - libcontainer container kubepods-burstable-podbd84838bdbadc395ab90facdc1fec145.slice.
Jan 30 05:02:01.846535 systemd[1]: Created slice kubepods-burstable-podc3e050e4290a1912f9fb334a6e9a6bc0.slice - libcontainer container kubepods-burstable-podc3e050e4290a1912f9fb334a6e9a6bc0.slice.
Jan 30 05:02:01.848424 kubelet[2180]: I0130 05:02:01.848369 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849178 kubelet[2180]: I0130 05:02:01.848424 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849178 kubelet[2180]: I0130 05:02:01.848459 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849178 kubelet[2180]: I0130 05:02:01.848489 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849178 kubelet[2180]: I0130 05:02:01.848518 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849178 kubelet[2180]: I0130 05:02:01.848541 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849740 kubelet[2180]: I0130 05:02:01.848564 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849740 kubelet[2180]: I0130 05:02:01.848590 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.849740 kubelet[2180]: I0130 05:02:01.848616 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/001ad993ae3cf4744955744d289caf24-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-0-0f8f4a9941\" (UID: \"001ad993ae3cf4744955744d289caf24\") " pod="kube-system/kube-scheduler-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.854451 kubelet[2180]: E0130 05:02:01.854229 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-0-0f8f4a9941?timeout=10s\": dial tcp 137.184.189.202:6443: connect: connection refused" interval="400ms"
Jan 30 05:02:01.855730 systemd[1]: Created slice kubepods-burstable-pod001ad993ae3cf4744955744d289caf24.slice - libcontainer container kubepods-burstable-pod001ad993ae3cf4744955744d289caf24.slice.
Jan 30 05:02:01.951651 kubelet[2180]: I0130 05:02:01.951614 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:01.952966 kubelet[2180]: E0130 05:02:01.952893 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.202:6443/api/v1/nodes\": dial tcp 137.184.189.202:6443: connect: connection refused" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:02.139777 kubelet[2180]: E0130 05:02:02.139089 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:02.140137 containerd[1465]: time="2025-01-30T05:02:02.140092378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-0-0f8f4a9941,Uid:bd84838bdbadc395ab90facdc1fec145,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:02.155700 kubelet[2180]: E0130 05:02:02.155601 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:02.160399 kubelet[2180]: E0130 05:02:02.159656 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:02.163401 containerd[1465]: time="2025-01-30T05:02:02.163330513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-0-0f8f4a9941,Uid:c3e050e4290a1912f9fb334a6e9a6bc0,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:02.165342 containerd[1465]: time="2025-01-30T05:02:02.165046377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-0-0f8f4a9941,Uid:001ad993ae3cf4744955744d289caf24,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:02.255037 kubelet[2180]: E0130 05:02:02.254941 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-0-0f8f4a9941?timeout=10s\": dial tcp 137.184.189.202:6443: connect: connection refused" interval="800ms"
Jan 30 05:02:02.355081 kubelet[2180]: I0130 05:02:02.355015 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:02.355857 kubelet[2180]: E0130 05:02:02.355470 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.202:6443/api/v1/nodes\": dial tcp 137.184.189.202:6443: connect: connection refused" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:02.484699 kubelet[2180]: W0130 05:02:02.484593 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:02.484699 kubelet[2180]: E0130 05:02:02.484702 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.189.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:02.680535 kubelet[2180]: W0130 05:02:02.680400 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:02.680535 kubelet[2180]: E0130 05:02:02.680502 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.189.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:02.922545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416932217.mount: Deactivated successfully.
Jan 30 05:02:03.001911 containerd[1465]: time="2025-01-30T05:02:03.001816717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 05:02:03.007679 containerd[1465]: time="2025-01-30T05:02:03.007568623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 05:02:03.016341 containerd[1465]: time="2025-01-30T05:02:03.014476997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 05:02:03.025035 containerd[1465]: time="2025-01-30T05:02:03.024948260Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 05:02:03.028400 containerd[1465]: time="2025-01-30T05:02:03.028330733Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 05:02:03.030887 containerd[1465]: time="2025-01-30T05:02:03.030643902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 05:02:03.036264 containerd[1465]: time="2025-01-30T05:02:03.036160929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 05:02:03.042043 containerd[1465]: time="2025-01-30T05:02:03.041958027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 05:02:03.044339 containerd[1465]: time="2025-01-30T05:02:03.044245916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 904.043827ms"
Jan 30 05:02:03.048853 containerd[1465]: time="2025-01-30T05:02:03.048762391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 883.614158ms"
Jan 30 05:02:03.051898 kubelet[2180]: W0130 05:02:03.051735 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-0-0f8f4a9941&limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:03.051898 kubelet[2180]: E0130 05:02:03.051847 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.189.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-0-0f8f4a9941&limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:03.053431 containerd[1465]: time="2025-01-30T05:02:03.052712086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 889.259341ms"
Jan 30 05:02:03.056729 kubelet[2180]: E0130 05:02:03.056654 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-0-0f8f4a9941?timeout=10s\": dial tcp 137.184.189.202:6443: connect: connection refused" interval="1.6s"
Jan 30 05:02:03.157253 kubelet[2180]: I0130 05:02:03.157172 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:03.158051 kubelet[2180]: E0130 05:02:03.157602 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.202:6443/api/v1/nodes\": dial tcp 137.184.189.202:6443: connect: connection refused" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:03.169671 kubelet[2180]: W0130 05:02:03.169568 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:03.169885 kubelet[2180]: E0130 05:02:03.169695 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.189.202:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.189.202:6443: connect: connection refused
Jan 30 05:02:03.408760 containerd[1465]: time="2025-01-30T05:02:03.408364529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:02:03.408760 containerd[1465]: time="2025-01-30T05:02:03.408433131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:02:03.408760 containerd[1465]: time="2025-01-30T05:02:03.408470697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.408760 containerd[1465]: time="2025-01-30T05:02:03.408591792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.414142 containerd[1465]: time="2025-01-30T05:02:03.413747062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:03.414142 containerd[1465]: time="2025-01-30T05:02:03.413857175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:03.414142 containerd[1465]: time="2025-01-30T05:02:03.413880362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.414142 containerd[1465]: time="2025-01-30T05:02:03.413993259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.422714 containerd[1465]: time="2025-01-30T05:02:03.422232684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:03.423960 containerd[1465]: time="2025-01-30T05:02:03.422373256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:03.423960 containerd[1465]: time="2025-01-30T05:02:03.423858839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.424450 containerd[1465]: time="2025-01-30T05:02:03.424168966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:03.457626 systemd[1]: Started cri-containerd-0486f244dfe40b7578b596a7f31cc65eaa89ce3e2f245cfda3d2e4e6533e23ce.scope - libcontainer container 0486f244dfe40b7578b596a7f31cc65eaa89ce3e2f245cfda3d2e4e6533e23ce. Jan 30 05:02:03.459927 systemd[1]: Started cri-containerd-c085a62cb55cb19e060ddfb00633991f67b88aac22628a4eeafca48dfcf499e4.scope - libcontainer container c085a62cb55cb19e060ddfb00633991f67b88aac22628a4eeafca48dfcf499e4. Jan 30 05:02:03.468562 systemd[1]: Started cri-containerd-d6a663282942c453313b3177b44414f080dd6a40a11bf257cd9ba113c2db62d1.scope - libcontainer container d6a663282942c453313b3177b44414f080dd6a40a11bf257cd9ba113c2db62d1. Jan 30 05:02:03.575213 containerd[1465]: time="2025-01-30T05:02:03.574703474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-0-0f8f4a9941,Uid:bd84838bdbadc395ab90facdc1fec145,Namespace:kube-system,Attempt:0,} returns sandbox id \"c085a62cb55cb19e060ddfb00633991f67b88aac22628a4eeafca48dfcf499e4\"" Jan 30 05:02:03.576230 containerd[1465]: time="2025-01-30T05:02:03.576160108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-0-0f8f4a9941,Uid:c3e050e4290a1912f9fb334a6e9a6bc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0486f244dfe40b7578b596a7f31cc65eaa89ce3e2f245cfda3d2e4e6533e23ce\"" Jan 30 05:02:03.577112 kubelet[2180]: E0130 05:02:03.576578 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:03.578237 kubelet[2180]: E0130 05:02:03.577748 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:03.585701 containerd[1465]: time="2025-01-30T05:02:03.585444462Z" level=info 
msg="CreateContainer within sandbox \"0486f244dfe40b7578b596a7f31cc65eaa89ce3e2f245cfda3d2e4e6533e23ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:02:03.586360 containerd[1465]: time="2025-01-30T05:02:03.586197737Z" level=info msg="CreateContainer within sandbox \"c085a62cb55cb19e060ddfb00633991f67b88aac22628a4eeafca48dfcf499e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:02:03.602596 containerd[1465]: time="2025-01-30T05:02:03.602537688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-0-0f8f4a9941,Uid:001ad993ae3cf4744955744d289caf24,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a663282942c453313b3177b44414f080dd6a40a11bf257cd9ba113c2db62d1\"" Jan 30 05:02:03.603580 kubelet[2180]: E0130 05:02:03.603552 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:03.606308 containerd[1465]: time="2025-01-30T05:02:03.606101532Z" level=info msg="CreateContainer within sandbox \"d6a663282942c453313b3177b44414f080dd6a40a11bf257cd9ba113c2db62d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:02:03.687678 containerd[1465]: time="2025-01-30T05:02:03.687540242Z" level=info msg="CreateContainer within sandbox \"c085a62cb55cb19e060ddfb00633991f67b88aac22628a4eeafca48dfcf499e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c723a70899c576f0808c53c1cd9b068a517e7ac02f22aa327c3d01c1dc5c730\"" Jan 30 05:02:03.689834 containerd[1465]: time="2025-01-30T05:02:03.689787661Z" level=info msg="StartContainer for \"0c723a70899c576f0808c53c1cd9b068a517e7ac02f22aa327c3d01c1dc5c730\"" Jan 30 05:02:03.697707 containerd[1465]: time="2025-01-30T05:02:03.697539855Z" level=info msg="CreateContainer within sandbox 
\"0486f244dfe40b7578b596a7f31cc65eaa89ce3e2f245cfda3d2e4e6533e23ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aeb034af53b45bd946426141451a8c619eecfe38a2a99f8b5466a66316d8f37a\"" Jan 30 05:02:03.698812 containerd[1465]: time="2025-01-30T05:02:03.698764982Z" level=info msg="StartContainer for \"aeb034af53b45bd946426141451a8c619eecfe38a2a99f8b5466a66316d8f37a\"" Jan 30 05:02:03.726782 containerd[1465]: time="2025-01-30T05:02:03.726553847Z" level=info msg="CreateContainer within sandbox \"d6a663282942c453313b3177b44414f080dd6a40a11bf257cd9ba113c2db62d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"adde87ea645b54d52f2e435cff3edb46bb1bc9b5058caf349b19246e8e0fc919\"" Jan 30 05:02:03.728372 containerd[1465]: time="2025-01-30T05:02:03.727592456Z" level=info msg="StartContainer for \"adde87ea645b54d52f2e435cff3edb46bb1bc9b5058caf349b19246e8e0fc919\"" Jan 30 05:02:03.730932 kubelet[2180]: E0130 05:02:03.730896 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.189.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.189.202:6443: connect: connection refused Jan 30 05:02:03.755287 systemd[1]: Started cri-containerd-aeb034af53b45bd946426141451a8c619eecfe38a2a99f8b5466a66316d8f37a.scope - libcontainer container aeb034af53b45bd946426141451a8c619eecfe38a2a99f8b5466a66316d8f37a. Jan 30 05:02:03.770931 systemd[1]: Started cri-containerd-0c723a70899c576f0808c53c1cd9b068a517e7ac02f22aa327c3d01c1dc5c730.scope - libcontainer container 0c723a70899c576f0808c53c1cd9b068a517e7ac02f22aa327c3d01c1dc5c730. Jan 30 05:02:03.811911 systemd[1]: Started cri-containerd-adde87ea645b54d52f2e435cff3edb46bb1bc9b5058caf349b19246e8e0fc919.scope - libcontainer container adde87ea645b54d52f2e435cff3edb46bb1bc9b5058caf349b19246e8e0fc919. 
Jan 30 05:02:03.904722 containerd[1465]: time="2025-01-30T05:02:03.904255488Z" level=info msg="StartContainer for \"aeb034af53b45bd946426141451a8c619eecfe38a2a99f8b5466a66316d8f37a\" returns successfully"
Jan 30 05:02:03.904722 containerd[1465]: time="2025-01-30T05:02:03.904425128Z" level=info msg="StartContainer for \"0c723a70899c576f0808c53c1cd9b068a517e7ac02f22aa327c3d01c1dc5c730\" returns successfully"
Jan 30 05:02:03.950642 containerd[1465]: time="2025-01-30T05:02:03.950487594Z" level=info msg="StartContainer for \"adde87ea645b54d52f2e435cff3edb46bb1bc9b5058caf349b19246e8e0fc919\" returns successfully"
Jan 30 05:02:04.657666 kubelet[2180]: E0130 05:02:04.657605 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-0-0f8f4a9941?timeout=10s\": dial tcp 137.184.189.202:6443: connect: connection refused" interval="3.2s"
Jan 30 05:02:04.728580 kubelet[2180]: E0130 05:02:04.727994 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:04.739061 kubelet[2180]: E0130 05:02:04.738986 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:04.764385 kubelet[2180]: I0130 05:02:04.763091 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:04.765451 kubelet[2180]: E0130 05:02:04.765423 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:04.765795 kubelet[2180]: E0130 05:02:04.765758 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.189.202:6443/api/v1/nodes\": dial tcp 137.184.189.202:6443: connect: connection refused" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:05.763438 kubelet[2180]: E0130 05:02:05.763396 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:05.764658 kubelet[2180]: E0130 05:02:05.764565 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:05.765369 kubelet[2180]: E0130 05:02:05.765252 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:06.768761 kubelet[2180]: E0130 05:02:06.768720 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:06.870962 kubelet[2180]: E0130 05:02:06.870913 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:07.534879 kubelet[2180]: E0130 05:02:07.534826 2180 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-0-0f8f4a9941" not found
Jan 30 05:02:07.617173 kubelet[2180]: I0130 05:02:07.616994 2180 apiserver.go:52] "Watching apiserver"
Jan 30 05:02:07.646177 kubelet[2180]: I0130 05:02:07.646102 2180 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 05:02:07.863701 kubelet[2180]: E0130 05:02:07.863505 2180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-0-0f8f4a9941\" not found" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:07.896898 kubelet[2180]: E0130 05:02:07.896831 2180 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-0-0f8f4a9941" not found
Jan 30 05:02:07.967531 kubelet[2180]: I0130 05:02:07.967457 2180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:07.979978 kubelet[2180]: I0130 05:02:07.979729 2180 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:09.320061 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)...
Jan 30 05:02:09.320651 systemd[1]: Reloading...
Jan 30 05:02:09.460336 zram_generator::config[2495]: No configuration found.
Jan 30 05:02:09.708587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:02:09.943892 systemd[1]: Reloading finished in 622 ms.
Jan 30 05:02:10.018835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:02:10.019569 kubelet[2180]: I0130 05:02:10.019147 2180 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 05:02:10.033114 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 05:02:10.033670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:02:10.033920 systemd[1]: kubelet.service: Consumed 1.277s CPU time, 111.5M memory peak, 0B memory swap peak.
Jan 30 05:02:10.046975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:02:10.313667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:02:10.327107 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 05:02:10.458321 kubelet[2546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:02:10.458321 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 05:02:10.458321 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:02:10.458321 kubelet[2546]: I0130 05:02:10.457975 2546 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 05:02:10.470281 kubelet[2546]: I0130 05:02:10.470230 2546 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 05:02:10.470281 kubelet[2546]: I0130 05:02:10.470271 2546 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 05:02:10.471067 kubelet[2546]: I0130 05:02:10.470858 2546 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 05:02:10.476278 kubelet[2546]: I0130 05:02:10.476213 2546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 05:02:10.481361 kubelet[2546]: I0130 05:02:10.480175 2546 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 05:02:10.493837 kubelet[2546]: I0130 05:02:10.493785 2546 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 05:02:10.494446 kubelet[2546]: I0130 05:02:10.494372 2546 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 05:02:10.494907 kubelet[2546]: I0130 05:02:10.494594 2546 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-0-0f8f4a9941","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 05:02:10.495127 kubelet[2546]: I0130 05:02:10.495112 2546 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 05:02:10.495220 kubelet[2546]: I0130 05:02:10.495211 2546 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 05:02:10.495955 kubelet[2546]: I0130 05:02:10.495652 2546 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:02:10.495955 kubelet[2546]: I0130 05:02:10.495814 2546 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 05:02:10.495955 kubelet[2546]: I0130 05:02:10.495832 2546 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 05:02:10.495955 kubelet[2546]: I0130 05:02:10.495870 2546 kubelet.go:312] "Adding apiserver pod source"
Jan 30 05:02:10.495955 kubelet[2546]: I0130 05:02:10.495888 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 05:02:10.500827 kubelet[2546]: I0130 05:02:10.500554 2546 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 05:02:10.503742 kubelet[2546]: I0130 05:02:10.503409 2546 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 05:02:10.505100 kubelet[2546]: I0130 05:02:10.504728 2546 server.go:1264] "Started kubelet"
Jan 30 05:02:10.516216 kubelet[2546]: I0130 05:02:10.511682 2546 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 05:02:10.516216 kubelet[2546]: I0130 05:02:10.512417 2546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 05:02:10.516216 kubelet[2546]: I0130 05:02:10.512834 2546 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 05:02:10.516216 kubelet[2546]: I0130 05:02:10.515253 2546 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 05:02:10.533687 kubelet[2546]: I0130 05:02:10.531592 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 05:02:10.545058 kubelet[2546]: I0130 05:02:10.545023 2546 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 05:02:10.545806 kubelet[2546]: I0130 05:02:10.545773 2546 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 05:02:10.546120 kubelet[2546]: I0130 05:02:10.546106 2546 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 05:02:10.553861 kubelet[2546]: I0130 05:02:10.553823 2546 factory.go:221] Registration of the systemd container factory successfully
Jan 30 05:02:10.554914 kubelet[2546]: I0130 05:02:10.553988 2546 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 05:02:10.566636 kubelet[2546]: E0130 05:02:10.566553 2546 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 05:02:10.580403 kubelet[2546]: I0130 05:02:10.575855 2546 factory.go:221] Registration of the containerd container factory successfully
Jan 30 05:02:10.604163 kubelet[2546]: I0130 05:02:10.604037 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 05:02:10.615444 kubelet[2546]: I0130 05:02:10.615278 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 05:02:10.615625 kubelet[2546]: I0130 05:02:10.615465 2546 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 05:02:10.615862 kubelet[2546]: I0130 05:02:10.615840 2546 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 05:02:10.619553 kubelet[2546]: E0130 05:02:10.619462 2546 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 05:02:10.647441 kubelet[2546]: I0130 05:02:10.646976 2546 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.667826 kubelet[2546]: I0130 05:02:10.667702 2546 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.667826 kubelet[2546]: I0130 05:02:10.667821 2546 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673250 2546 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673278 2546 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673327 2546 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673555 2546 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673571 2546 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 05:02:10.673996 kubelet[2546]: I0130 05:02:10.673597 2546 policy_none.go:49] "None policy: Start"
Jan 30 05:02:10.675578 kubelet[2546]: I0130 05:02:10.675543 2546 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 05:02:10.675726 kubelet[2546]: I0130 05:02:10.675591 2546 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 05:02:10.677112 kubelet[2546]: I0130 05:02:10.677069 2546 state_mem.go:75] "Updated machine memory state"
Jan 30 05:02:10.692070 kubelet[2546]: I0130 05:02:10.691384 2546 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 05:02:10.692070 kubelet[2546]: I0130 05:02:10.691697 2546 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 05:02:10.692070 kubelet[2546]: I0130 05:02:10.691992 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 05:02:10.720732 kubelet[2546]: I0130 05:02:10.720663 2546 topology_manager.go:215] "Topology Admit Handler" podUID="bd84838bdbadc395ab90facdc1fec145" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.720732 kubelet[2546]: I0130 05:02:10.720818 2546 topology_manager.go:215] "Topology Admit Handler" podUID="c3e050e4290a1912f9fb334a6e9a6bc0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.721049 kubelet[2546]: I0130 05:02:10.720917 2546 topology_manager.go:215] "Topology Admit Handler" podUID="001ad993ae3cf4744955744d289caf24" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.749697 kubelet[2546]: W0130 05:02:10.749636 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 05:02:10.754537 kubelet[2546]: W0130 05:02:10.754471 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 05:02:10.754729 kubelet[2546]: W0130 05:02:10.754589 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 05:02:10.850639 kubelet[2546]: I0130 05:02:10.850237 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.850639 kubelet[2546]: I0130 05:02:10.850310 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.850639 kubelet[2546]: I0130 05:02:10.850343 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.850639 kubelet[2546]: I0130 05:02:10.850370 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.850639 kubelet[2546]: I0130 05:02:10.850394 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.851244 kubelet[2546]: I0130 05:02:10.850522 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd84838bdbadc395ab90facdc1fec145-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-0-0f8f4a9941\" (UID: \"bd84838bdbadc395ab90facdc1fec145\") " pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.851244 kubelet[2546]: I0130 05:02:10.850550 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.851244 kubelet[2546]: I0130 05:02:10.850575 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3e050e4290a1912f9fb334a6e9a6bc0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" (UID: \"c3e050e4290a1912f9fb334a6e9a6bc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.851244 kubelet[2546]: I0130 05:02:10.850601 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/001ad993ae3cf4744955744d289caf24-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-0-0f8f4a9941\" (UID: \"001ad993ae3cf4744955744d289caf24\") " pod="kube-system/kube-scheduler-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:10.975312 update_engine[1448]: I20250130 05:02:10.975167 1448 update_attempter.cc:509] Updating boot flags...
Jan 30 05:02:11.039453 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2595)
Jan 30 05:02:11.051692 kubelet[2546]: E0130 05:02:11.051643 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.056797 kubelet[2546]: E0130 05:02:11.056562 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.065462 kubelet[2546]: E0130 05:02:11.064221 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.172531 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2596)
Jan 30 05:02:11.498527 kubelet[2546]: I0130 05:02:11.497367 2546 apiserver.go:52] "Watching apiserver"
Jan 30 05:02:11.546565 kubelet[2546]: I0130 05:02:11.546499 2546 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 05:02:11.648974 kubelet[2546]: E0130 05:02:11.647577 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.648974 kubelet[2546]: E0130 05:02:11.648711 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.659801 kubelet[2546]: W0130 05:02:11.659744 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 05:02:11.660008 kubelet[2546]: E0130 05:02:11.659834 2546 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-0-0f8f4a9941\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941"
Jan 30 05:02:11.661314 kubelet[2546]: E0130 05:02:11.660533 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:11.733140 kubelet[2546]: I0130 05:02:11.732911 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-0-0f8f4a9941" podStartSLOduration=1.7328827169999999 podStartE2EDuration="1.732882717s" podCreationTimestamp="2025-01-30 05:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:11.704801558 +0000 UTC m=+1.359086490" watchObservedRunningTime="2025-01-30 05:02:11.732882717 +0000 UTC m=+1.387167647"
Jan 30 05:02:11.779395 kubelet[2546]: I0130 05:02:11.779204 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-0-0f8f4a9941" podStartSLOduration=1.779172661 podStartE2EDuration="1.779172661s" podCreationTimestamp="2025-01-30 05:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:11.733627527 +0000 UTC m=+1.387912459" watchObservedRunningTime="2025-01-30 05:02:11.779172661 +0000 UTC m=+1.433457595"
Jan 30 05:02:12.359934 sudo[1631]: pam_unix(sudo:session): session closed for user root
Jan 30 05:02:12.368218 sshd[1628]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:12.373688 systemd[1]: sshd@7-137.184.189.202:22-147.75.109.163:34350.service: Deactivated successfully.
Jan 30 05:02:12.378446 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 05:02:12.378949 systemd[1]: session-7.scope: Consumed 5.609s CPU time, 191.5M memory peak, 0B memory swap peak.
Jan 30 05:02:12.382259 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Jan 30 05:02:12.385949 systemd-logind[1447]: Removed session 7.
Jan 30 05:02:12.650722 kubelet[2546]: E0130 05:02:12.649696 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:12.652391 kubelet[2546]: E0130 05:02:12.652138 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:17.260324 kubelet[2546]: E0130 05:02:17.260020 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:17.280807 kubelet[2546]: I0130 05:02:17.280601 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-0-0f8f4a9941" podStartSLOduration=7.280543639 podStartE2EDuration="7.280543639s" podCreationTimestamp="2025-01-30 05:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:11.782157129 +0000 UTC m=+1.436442062" watchObservedRunningTime="2025-01-30 05:02:17.280543639 +0000 UTC m=+6.934828574"
Jan 30 05:02:17.659521 kubelet[2546]: E0130 05:02:17.658932 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:18.313597 kubelet[2546]: E0130 05:02:18.313552 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:18.661837 kubelet[2546]: E0130 05:02:18.661122 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:18.663826 kubelet[2546]: E0130 05:02:18.663759 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:21.296866 kubelet[2546]: E0130 05:02:21.296611 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:24.816693 kubelet[2546]: I0130 05:02:24.816651 2546 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 05:02:24.819002 containerd[1465]: time="2025-01-30T05:02:24.817989851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 05:02:24.819730 kubelet[2546]: I0130 05:02:24.818332 2546 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 05:02:25.559054 kubelet[2546]: I0130 05:02:25.558410 2546 topology_manager.go:215] "Topology Admit Handler" podUID="9f36fbdb-c93d-4bbe-94a9-b3aec936dc92" podNamespace="kube-system" podName="kube-proxy-88875"
Jan 30 05:02:25.571336 kubelet[2546]: I0130 05:02:25.570645 2546 topology_manager.go:215] "Topology Admit Handler" podUID="711d6d61-5058-41f1-9b71-8c282678a441" podNamespace="kube-flannel" podName="kube-flannel-ds-lqgtn"
Jan 30 05:02:25.575625 systemd[1]: Created slice kubepods-besteffort-pod9f36fbdb_c93d_4bbe_94a9_b3aec936dc92.slice - libcontainer container kubepods-besteffort-pod9f36fbdb_c93d_4bbe_94a9_b3aec936dc92.slice.
Jan 30 05:02:25.592150 systemd[1]: Created slice kubepods-burstable-pod711d6d61_5058_41f1_9b71_8c282678a441.slice - libcontainer container kubepods-burstable-pod711d6d61_5058_41f1_9b71_8c282678a441.slice.
Jan 30 05:02:25.648187 kubelet[2546]: I0130 05:02:25.648125 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f36fbdb-c93d-4bbe-94a9-b3aec936dc92-lib-modules\") pod \"kube-proxy-88875\" (UID: \"9f36fbdb-c93d-4bbe-94a9-b3aec936dc92\") " pod="kube-system/kube-proxy-88875"
Jan 30 05:02:25.648187 kubelet[2546]: I0130 05:02:25.648174 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/711d6d61-5058-41f1-9b71-8c282678a441-cni-plugin\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648187 kubelet[2546]: I0130 05:02:25.648198 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/711d6d61-5058-41f1-9b71-8c282678a441-flannel-cfg\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648573 kubelet[2546]: I0130 05:02:25.648216 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f36fbdb-c93d-4bbe-94a9-b3aec936dc92-xtables-lock\") pod \"kube-proxy-88875\" (UID: \"9f36fbdb-c93d-4bbe-94a9-b3aec936dc92\") " pod="kube-system/kube-proxy-88875"
Jan 30 05:02:25.648573 kubelet[2546]: I0130 05:02:25.648233 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26r69\" (UniqueName: \"kubernetes.io/projected/9f36fbdb-c93d-4bbe-94a9-b3aec936dc92-kube-api-access-26r69\") pod \"kube-proxy-88875\" (UID: \"9f36fbdb-c93d-4bbe-94a9-b3aec936dc92\") " pod="kube-system/kube-proxy-88875"
Jan 30 05:02:25.648573 kubelet[2546]: I0130 05:02:25.648251 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/711d6d61-5058-41f1-9b71-8c282678a441-cni\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648573 kubelet[2546]: I0130 05:02:25.648265 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/711d6d61-5058-41f1-9b71-8c282678a441-xtables-lock\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648573 kubelet[2546]: I0130 05:02:25.648281 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcwk\" (UniqueName: \"kubernetes.io/projected/711d6d61-5058-41f1-9b71-8c282678a441-kube-api-access-vdcwk\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648876 kubelet[2546]: I0130 05:02:25.648307 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/711d6d61-5058-41f1-9b71-8c282678a441-run\") pod \"kube-flannel-ds-lqgtn\" (UID: \"711d6d61-5058-41f1-9b71-8c282678a441\") " pod="kube-flannel/kube-flannel-ds-lqgtn"
Jan 30 05:02:25.648876 kubelet[2546]: I0130 05:02:25.648322 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f36fbdb-c93d-4bbe-94a9-b3aec936dc92-kube-proxy\") pod \"kube-proxy-88875\" (UID: \"9f36fbdb-c93d-4bbe-94a9-b3aec936dc92\") " pod="kube-system/kube-proxy-88875"
Jan 30 05:02:25.887689 kubelet[2546]: E0130 05:02:25.887531 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:25.889879 containerd[1465]: time="2025-01-30T05:02:25.889826379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88875,Uid:9f36fbdb-c93d-4bbe-94a9-b3aec936dc92,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:25.898320 kubelet[2546]: E0130 05:02:25.897657 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:25.899205 containerd[1465]: time="2025-01-30T05:02:25.898892404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lqgtn,Uid:711d6d61-5058-41f1-9b71-8c282678a441,Namespace:kube-flannel,Attempt:0,}"
Jan 30 05:02:25.980389 containerd[1465]: time="2025-01-30T05:02:25.979828557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:02:25.980389 containerd[1465]: time="2025-01-30T05:02:25.980033517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:02:25.980389 containerd[1465]: time="2025-01-30T05:02:25.980064789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:02:25.980389 containerd[1465]: time="2025-01-30T05:02:25.980246951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:02:26.010047 containerd[1465]: time="2025-01-30T05:02:26.007781945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:02:26.010047 containerd[1465]: time="2025-01-30T05:02:26.009787504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:02:26.010047 containerd[1465]: time="2025-01-30T05:02:26.009813069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:02:26.010047 containerd[1465]: time="2025-01-30T05:02:26.009943082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:02:26.019661 systemd[1]: Started cri-containerd-c1d36b8602e0eae6583e5b1748b95c1be709d2ddd349c856a0a075c6fb67c6ab.scope - libcontainer container c1d36b8602e0eae6583e5b1748b95c1be709d2ddd349c856a0a075c6fb67c6ab.
Jan 30 05:02:26.050697 systemd[1]: Started cri-containerd-4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e.scope - libcontainer container 4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e.
Jan 30 05:02:26.087526 containerd[1465]: time="2025-01-30T05:02:26.087480500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88875,Uid:9f36fbdb-c93d-4bbe-94a9-b3aec936dc92,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d36b8602e0eae6583e5b1748b95c1be709d2ddd349c856a0a075c6fb67c6ab\""
Jan 30 05:02:26.090082 kubelet[2546]: E0130 05:02:26.090049 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:26.095190 containerd[1465]: time="2025-01-30T05:02:26.095037929Z" level=info msg="CreateContainer within sandbox \"c1d36b8602e0eae6583e5b1748b95c1be709d2ddd349c856a0a075c6fb67c6ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 05:02:26.127909 containerd[1465]: time="2025-01-30T05:02:26.127849316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lqgtn,Uid:711d6d61-5058-41f1-9b71-8c282678a441,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\""
Jan 30 05:02:26.129456 kubelet[2546]: E0130 05:02:26.129419 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:26.132840 containerd[1465]: time="2025-01-30T05:02:26.132668046Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 30 05:02:26.151893 containerd[1465]: time="2025-01-30T05:02:26.151152939Z" level=info msg="CreateContainer within sandbox \"c1d36b8602e0eae6583e5b1748b95c1be709d2ddd349c856a0a075c6fb67c6ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c746d25933227e2e5df2439a7e0147b36486ab01871882ba1b2fba681ae64ba\""
Jan 30 05:02:26.152342 containerd[1465]: time="2025-01-30T05:02:26.152309426Z" level=info msg="StartContainer for \"6c746d25933227e2e5df2439a7e0147b36486ab01871882ba1b2fba681ae64ba\""
Jan 30 05:02:26.191698 systemd[1]: Started cri-containerd-6c746d25933227e2e5df2439a7e0147b36486ab01871882ba1b2fba681ae64ba.scope - libcontainer container 6c746d25933227e2e5df2439a7e0147b36486ab01871882ba1b2fba681ae64ba.
Jan 30 05:02:26.237660 containerd[1465]: time="2025-01-30T05:02:26.237596473Z" level=info msg="StartContainer for \"6c746d25933227e2e5df2439a7e0147b36486ab01871882ba1b2fba681ae64ba\" returns successfully"
Jan 30 05:02:26.689411 kubelet[2546]: E0130 05:02:26.688955 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:28.207198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403780406.mount: Deactivated successfully.
Jan 30 05:02:28.285429 containerd[1465]: time="2025-01-30T05:02:28.285355237Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:28.288036 containerd[1465]: time="2025-01-30T05:02:28.287925385Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 30 05:02:28.292232 containerd[1465]: time="2025-01-30T05:02:28.292120474Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:28.298532 containerd[1465]: time="2025-01-30T05:02:28.298420823Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:28.300216 containerd[1465]: time="2025-01-30T05:02:28.299959823Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.167217685s"
Jan 30 05:02:28.300216 containerd[1465]: time="2025-01-30T05:02:28.300029509Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 30 05:02:28.305085 containerd[1465]: time="2025-01-30T05:02:28.305020513Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 30 05:02:28.336780 containerd[1465]: time="2025-01-30T05:02:28.336709576Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8\""
Jan 30 05:02:28.337825 containerd[1465]: time="2025-01-30T05:02:28.337771650Z" level=info msg="StartContainer for \"a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8\""
Jan 30 05:02:28.395185 systemd[1]: Started cri-containerd-a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8.scope - libcontainer container a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8.
Jan 30 05:02:28.436319 systemd[1]: cri-containerd-a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8.scope: Deactivated successfully.
Jan 30 05:02:28.437203 containerd[1465]: time="2025-01-30T05:02:28.437152195Z" level=info msg="StartContainer for \"a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8\" returns successfully"
Jan 30 05:02:28.513126 containerd[1465]: time="2025-01-30T05:02:28.513026997Z" level=info msg="shim disconnected" id=a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8 namespace=k8s.io
Jan 30 05:02:28.513941 containerd[1465]: time="2025-01-30T05:02:28.513577448Z" level=warning msg="cleaning up after shim disconnected" id=a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8 namespace=k8s.io
Jan 30 05:02:28.513941 containerd[1465]: time="2025-01-30T05:02:28.513617088Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:28.690975 kubelet[2546]: E0130 05:02:28.690875 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:28.692689 containerd[1465]: time="2025-01-30T05:02:28.692654675Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 30 05:02:28.711282 kubelet[2546]: I0130 05:02:28.711183 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-88875" podStartSLOduration=3.71115541 podStartE2EDuration="3.71115541s" podCreationTimestamp="2025-01-30 05:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:26.69795076 +0000 UTC m=+16.352235690" watchObservedRunningTime="2025-01-30 05:02:28.71115541 +0000 UTC m=+18.365440343"
Jan 30 05:02:29.083983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a662beb71e315f90ca4cfd9c02f5a0b194bc0a1e05317cc3818d838ebd8710d8-rootfs.mount: Deactivated successfully.
Jan 30 05:02:30.997826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239070223.mount: Deactivated successfully.
Jan 30 05:02:32.153713 containerd[1465]: time="2025-01-30T05:02:32.153644819Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:32.157116 containerd[1465]: time="2025-01-30T05:02:32.156834434Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 30 05:02:32.160052 containerd[1465]: time="2025-01-30T05:02:32.159948864Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:32.166837 containerd[1465]: time="2025-01-30T05:02:32.166758534Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:02:32.169316 containerd[1465]: time="2025-01-30T05:02:32.169075711Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.476379489s"
Jan 30 05:02:32.169316 containerd[1465]: time="2025-01-30T05:02:32.169143737Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 30 05:02:32.174466 containerd[1465]: time="2025-01-30T05:02:32.174252666Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 05:02:32.207525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932463858.mount: Deactivated successfully.
Jan 30 05:02:32.215233 containerd[1465]: time="2025-01-30T05:02:32.215131139Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf\""
Jan 30 05:02:32.218534 containerd[1465]: time="2025-01-30T05:02:32.217669953Z" level=info msg="StartContainer for \"59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf\""
Jan 30 05:02:32.268647 systemd[1]: Started cri-containerd-59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf.scope - libcontainer container 59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf.
Jan 30 05:02:32.308854 systemd[1]: cri-containerd-59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf.scope: Deactivated successfully.
Jan 30 05:02:32.312711 containerd[1465]: time="2025-01-30T05:02:32.312465966Z" level=info msg="StartContainer for \"59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf\" returns successfully"
Jan 30 05:02:32.344610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf-rootfs.mount: Deactivated successfully.
Jan 30 05:02:32.401232 kubelet[2546]: I0130 05:02:32.400921 2546 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 05:02:32.422675 containerd[1465]: time="2025-01-30T05:02:32.422062287Z" level=info msg="shim disconnected" id=59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf namespace=k8s.io
Jan 30 05:02:32.422675 containerd[1465]: time="2025-01-30T05:02:32.422143098Z" level=warning msg="cleaning up after shim disconnected" id=59d7b6870694446c339c793eba17824100e0d8db4bad3f29b8198565dad8aebf namespace=k8s.io
Jan 30 05:02:32.422675 containerd[1465]: time="2025-01-30T05:02:32.422157088Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:32.456072 kubelet[2546]: I0130 05:02:32.455233 2546 topology_manager.go:215] "Topology Admit Handler" podUID="50902aeb-7cd3-4b34-ac85-e78901136a9f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p7wjm"
Jan 30 05:02:32.461481 kubelet[2546]: I0130 05:02:32.461138 2546 topology_manager.go:215] "Topology Admit Handler" podUID="a558c77b-1eec-4538-a8e3-45caad1c9022" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h7qq4"
Jan 30 05:02:32.467415 containerd[1465]: time="2025-01-30T05:02:32.464898877Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:02:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 05:02:32.488088 systemd[1]: Created slice kubepods-burstable-pod50902aeb_7cd3_4b34_ac85_e78901136a9f.slice - libcontainer container kubepods-burstable-pod50902aeb_7cd3_4b34_ac85_e78901136a9f.slice.
Jan 30 05:02:32.500142 kubelet[2546]: I0130 05:02:32.500055 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxrg\" (UniqueName: \"kubernetes.io/projected/50902aeb-7cd3-4b34-ac85-e78901136a9f-kube-api-access-2mxrg\") pod \"coredns-7db6d8ff4d-p7wjm\" (UID: \"50902aeb-7cd3-4b34-ac85-e78901136a9f\") " pod="kube-system/coredns-7db6d8ff4d-p7wjm"
Jan 30 05:02:32.500834 kubelet[2546]: I0130 05:02:32.500763 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50902aeb-7cd3-4b34-ac85-e78901136a9f-config-volume\") pod \"coredns-7db6d8ff4d-p7wjm\" (UID: \"50902aeb-7cd3-4b34-ac85-e78901136a9f\") " pod="kube-system/coredns-7db6d8ff4d-p7wjm"
Jan 30 05:02:32.501347 kubelet[2546]: I0130 05:02:32.501213 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a558c77b-1eec-4538-a8e3-45caad1c9022-config-volume\") pod \"coredns-7db6d8ff4d-h7qq4\" (UID: \"a558c77b-1eec-4538-a8e3-45caad1c9022\") " pod="kube-system/coredns-7db6d8ff4d-h7qq4"
Jan 30 05:02:32.502340 kubelet[2546]: I0130 05:02:32.502136 2546 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qw2\" (UniqueName: \"kubernetes.io/projected/a558c77b-1eec-4538-a8e3-45caad1c9022-kube-api-access-z4qw2\") pod \"coredns-7db6d8ff4d-h7qq4\" (UID: \"a558c77b-1eec-4538-a8e3-45caad1c9022\") " pod="kube-system/coredns-7db6d8ff4d-h7qq4"
Jan 30 05:02:32.503388 systemd[1]: Created slice kubepods-burstable-poda558c77b_1eec_4538_a8e3_45caad1c9022.slice - libcontainer container kubepods-burstable-poda558c77b_1eec_4538_a8e3_45caad1c9022.slice.
Jan 30 05:02:32.712417 kubelet[2546]: E0130 05:02:32.710821 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:32.719337 containerd[1465]: time="2025-01-30T05:02:32.719263645Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 30 05:02:32.757414 containerd[1465]: time="2025-01-30T05:02:32.757286750Z" level=info msg="CreateContainer within sandbox \"4086fb2efff78e53ca35a12b092b4f490e633bbbb2d594289874aaddec05dc4e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9f3020a68cc26074653a0afb5b0a23de414a250a496900eea6cafe935539b2f0\""
Jan 30 05:02:32.758132 containerd[1465]: time="2025-01-30T05:02:32.758063855Z" level=info msg="StartContainer for \"9f3020a68cc26074653a0afb5b0a23de414a250a496900eea6cafe935539b2f0\""
Jan 30 05:02:32.795036 kubelet[2546]: E0130 05:02:32.794421 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:32.797178 containerd[1465]: time="2025-01-30T05:02:32.796074072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7wjm,Uid:50902aeb-7cd3-4b34-ac85-e78901136a9f,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:32.796816 systemd[1]: Started cri-containerd-9f3020a68cc26074653a0afb5b0a23de414a250a496900eea6cafe935539b2f0.scope - libcontainer container 9f3020a68cc26074653a0afb5b0a23de414a250a496900eea6cafe935539b2f0.
Jan 30 05:02:32.812167 kubelet[2546]: E0130 05:02:32.811614 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:02:32.813553 containerd[1465]: time="2025-01-30T05:02:32.812607763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h7qq4,Uid:a558c77b-1eec-4538-a8e3-45caad1c9022,Namespace:kube-system,Attempt:0,}"
Jan 30 05:02:32.861260 containerd[1465]: time="2025-01-30T05:02:32.861210370Z" level=info msg="StartContainer for \"9f3020a68cc26074653a0afb5b0a23de414a250a496900eea6cafe935539b2f0\" returns successfully"
Jan 30 05:02:32.926763 containerd[1465]: time="2025-01-30T05:02:32.924391202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7wjm,Uid:50902aeb-7cd3-4b34-ac85-e78901136a9f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d64d9e048377758a8e6ad5374938ad36ca97a6f8251f69f6e521ffcc5a469cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 05:02:32.926920 kubelet[2546]: E0130 05:02:32.926403 2546 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d64d9e048377758a8e6ad5374938ad36ca97a6f8251f69f6e521ffcc5a469cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 05:02:32.926920 kubelet[2546]: E0130 05:02:32.926472 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d64d9e048377758a8e6ad5374938ad36ca97a6f8251f69f6e521ffcc5a469cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-p7wjm"
Jan 30 05:02:32.926920 kubelet[2546]: E0130 05:02:32.926493 2546 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d64d9e048377758a8e6ad5374938ad36ca97a6f8251f69f6e521ffcc5a469cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-p7wjm"
Jan 30 05:02:32.926920 kubelet[2546]: E0130 05:02:32.926541 2546 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-p7wjm_kube-system(50902aeb-7cd3-4b34-ac85-e78901136a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-p7wjm_kube-system(50902aeb-7cd3-4b34-ac85-e78901136a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d64d9e048377758a8e6ad5374938ad36ca97a6f8251f69f6e521ffcc5a469cd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-p7wjm" podUID="50902aeb-7cd3-4b34-ac85-e78901136a9f"
Jan 30 05:02:32.927499 containerd[1465]: time="2025-01-30T05:02:32.927434020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h7qq4,Uid:a558c77b-1eec-4538-a8e3-45caad1c9022,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7e7c298e4c452b5f974d9a84b09484e82130a1f87bc6a92c9ab5a18a04977f8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 05:02:32.927972 kubelet[2546]: E0130 05:02:32.927706 2546 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c298e4c452b5f974d9a84b09484e82130a1f87bc6a92c9ab5a18a04977f8\": plugin
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 05:02:32.927972 kubelet[2546]: E0130 05:02:32.927766 2546 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c298e4c452b5f974d9a84b09484e82130a1f87bc6a92c9ab5a18a04977f8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-h7qq4" Jan 30 05:02:32.927972 kubelet[2546]: E0130 05:02:32.927793 2546 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c298e4c452b5f974d9a84b09484e82130a1f87bc6a92c9ab5a18a04977f8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-h7qq4" Jan 30 05:02:32.927972 kubelet[2546]: E0130 05:02:32.927843 2546 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h7qq4_kube-system(a558c77b-1eec-4538-a8e3-45caad1c9022)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h7qq4_kube-system(a558c77b-1eec-4538-a8e3-45caad1c9022)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e7c298e4c452b5f974d9a84b09484e82130a1f87bc6a92c9ab5a18a04977f8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-h7qq4" podUID="a558c77b-1eec-4538-a8e3-45caad1c9022" Jan 30 05:02:33.716796 kubelet[2546]: E0130 05:02:33.716365 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 
05:02:33.970914 systemd-networkd[1366]: flannel.1: Link UP Jan 30 05:02:33.970926 systemd-networkd[1366]: flannel.1: Gained carrier Jan 30 05:02:34.720259 kubelet[2546]: E0130 05:02:34.719962 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:35.089543 systemd-networkd[1366]: flannel.1: Gained IPv6LL Jan 30 05:02:43.618929 kubelet[2546]: E0130 05:02:43.617538 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:43.620975 containerd[1465]: time="2025-01-30T05:02:43.618143186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7wjm,Uid:50902aeb-7cd3-4b34-ac85-e78901136a9f,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:43.719332 systemd-networkd[1366]: cni0: Link UP Jan 30 05:02:43.719342 systemd-networkd[1366]: cni0: Gained carrier Jan 30 05:02:43.726693 systemd-networkd[1366]: cni0: Lost carrier Jan 30 05:02:43.734560 systemd-networkd[1366]: veth2ba6c68a: Link UP Jan 30 05:02:43.736863 kernel: cni0: port 1(veth2ba6c68a) entered blocking state Jan 30 05:02:43.736985 kernel: cni0: port 1(veth2ba6c68a) entered disabled state Jan 30 05:02:43.738643 kernel: veth2ba6c68a: entered allmulticast mode Jan 30 05:02:43.740223 kernel: veth2ba6c68a: entered promiscuous mode Jan 30 05:02:43.742715 kernel: cni0: port 1(veth2ba6c68a) entered blocking state Jan 30 05:02:43.742863 kernel: cni0: port 1(veth2ba6c68a) entered forwarding state Jan 30 05:02:43.743417 kernel: cni0: port 1(veth2ba6c68a) entered disabled state Jan 30 05:02:43.758604 kernel: cni0: port 1(veth2ba6c68a) entered blocking state Jan 30 05:02:43.758703 kernel: cni0: port 1(veth2ba6c68a) entered forwarding state Jan 30 05:02:43.765069 systemd-networkd[1366]: veth2ba6c68a: Gained carrier Jan 30 
05:02:43.766538 systemd-networkd[1366]: cni0: Gained carrier Jan 30 05:02:43.770925 containerd[1465]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106628), "name":"cbr0", "type":"bridge"} Jan 30 05:02:43.770925 containerd[1465]: delegateAdd: netconf sent to delegate plugin: Jan 30 05:02:43.798696 containerd[1465]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T05:02:43.798570018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:43.798696 containerd[1465]: time="2025-01-30T05:02:43.798633598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:43.798696 containerd[1465]: time="2025-01-30T05:02:43.798645285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:43.799115 containerd[1465]: time="2025-01-30T05:02:43.799032480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:43.827524 systemd[1]: run-containerd-runc-k8s.io-6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab-runc.zuppYM.mount: Deactivated successfully. 
Jan 30 05:02:43.843649 systemd[1]: Started cri-containerd-6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab.scope - libcontainer container 6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab. Jan 30 05:02:43.904037 containerd[1465]: time="2025-01-30T05:02:43.903832733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7wjm,Uid:50902aeb-7cd3-4b34-ac85-e78901136a9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab\"" Jan 30 05:02:43.905942 kubelet[2546]: E0130 05:02:43.905905 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:43.908995 containerd[1465]: time="2025-01-30T05:02:43.908952657Z" level=info msg="CreateContainer within sandbox \"6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:02:43.940841 containerd[1465]: time="2025-01-30T05:02:43.940772523Z" level=info msg="CreateContainer within sandbox \"6c0dae169eb8a9556ed2242b6faf7832c5980bd1ec9ee06acc53acd899dfd6ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64928cd643380d3e894556756fc5c75416d738cd6a1ac1b1b0fe7bcd1160729d\"" Jan 30 05:02:43.942969 containerd[1465]: time="2025-01-30T05:02:43.942908478Z" level=info msg="StartContainer for \"64928cd643380d3e894556756fc5c75416d738cd6a1ac1b1b0fe7bcd1160729d\"" Jan 30 05:02:43.982597 systemd[1]: Started cri-containerd-64928cd643380d3e894556756fc5c75416d738cd6a1ac1b1b0fe7bcd1160729d.scope - libcontainer container 64928cd643380d3e894556756fc5c75416d738cd6a1ac1b1b0fe7bcd1160729d. 
Jan 30 05:02:44.024884 containerd[1465]: time="2025-01-30T05:02:44.024824291Z" level=info msg="StartContainer for \"64928cd643380d3e894556756fc5c75416d738cd6a1ac1b1b0fe7bcd1160729d\" returns successfully" Jan 30 05:02:44.752030 kubelet[2546]: E0130 05:02:44.751599 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:44.766613 kubelet[2546]: I0130 05:02:44.766284 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lqgtn" podStartSLOduration=13.726722238 podStartE2EDuration="19.766262696s" podCreationTimestamp="2025-01-30 05:02:25 +0000 UTC" firstStartedPulling="2025-01-30 05:02:26.131978491 +0000 UTC m=+15.786263402" lastFinishedPulling="2025-01-30 05:02:32.171518949 +0000 UTC m=+21.825803860" observedRunningTime="2025-01-30 05:02:33.73286244 +0000 UTC m=+23.387147371" watchObservedRunningTime="2025-01-30 05:02:44.766262696 +0000 UTC m=+34.420547626" Jan 30 05:02:44.782994 kubelet[2546]: I0130 05:02:44.782922 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p7wjm" podStartSLOduration=19.782894765 podStartE2EDuration="19.782894765s" podCreationTimestamp="2025-01-30 05:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:44.767272628 +0000 UTC m=+34.421557559" watchObservedRunningTime="2025-01-30 05:02:44.782894765 +0000 UTC m=+34.437179696" Jan 30 05:02:45.329537 systemd-networkd[1366]: veth2ba6c68a: Gained IPv6LL Jan 30 05:02:45.393608 systemd-networkd[1366]: cni0: Gained IPv6LL Jan 30 05:02:45.752945 kubelet[2546]: E0130 05:02:45.752888 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Jan 30 05:02:46.758400 kubelet[2546]: E0130 05:02:46.756953 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:47.273810 systemd[1]: Started sshd@8-137.184.189.202:22-147.75.109.163:60920.service - OpenSSH per-connection server daemon (147.75.109.163:60920). Jan 30 05:02:47.330085 sshd[3333]: Accepted publickey for core from 147.75.109.163 port 60920 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:47.332616 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:47.347227 systemd-logind[1447]: New session 8 of user core. Jan 30 05:02:47.359629 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 05:02:47.519959 sshd[3333]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:47.526970 systemd[1]: sshd@8-137.184.189.202:22-147.75.109.163:60920.service: Deactivated successfully. Jan 30 05:02:47.530281 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 05:02:47.531900 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 30 05:02:47.533187 systemd-logind[1447]: Removed session 8. 
Jan 30 05:02:47.618993 kubelet[2546]: E0130 05:02:47.617680 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:47.619356 containerd[1465]: time="2025-01-30T05:02:47.618426774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h7qq4,Uid:a558c77b-1eec-4538-a8e3-45caad1c9022,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:47.665655 systemd-networkd[1366]: veth57c4f688: Link UP Jan 30 05:02:47.668522 kernel: cni0: port 2(veth57c4f688) entered blocking state Jan 30 05:02:47.668671 kernel: cni0: port 2(veth57c4f688) entered disabled state Jan 30 05:02:47.672326 kernel: veth57c4f688: entered allmulticast mode Jan 30 05:02:47.674529 kernel: veth57c4f688: entered promiscuous mode Jan 30 05:02:47.683286 kernel: cni0: port 2(veth57c4f688) entered blocking state Jan 30 05:02:47.683482 kernel: cni0: port 2(veth57c4f688) entered forwarding state Jan 30 05:02:47.683771 systemd-networkd[1366]: veth57c4f688: Gained carrier Jan 30 05:02:47.686463 containerd[1465]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 30 05:02:47.686463 containerd[1465]: delegateAdd: netconf sent to delegate plugin: Jan 30 05:02:47.728094 containerd[1465]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T05:02:47.726592069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:47.728094 containerd[1465]: time="2025-01-30T05:02:47.727729394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:47.728094 containerd[1465]: time="2025-01-30T05:02:47.727756553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:47.728094 containerd[1465]: time="2025-01-30T05:02:47.727914106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:47.762574 systemd[1]: Started cri-containerd-422a95e4bfb4e49467e9f70af38b16158274efb07f80b1076a4b3f3e86ae481a.scope - libcontainer container 422a95e4bfb4e49467e9f70af38b16158274efb07f80b1076a4b3f3e86ae481a. 
Jan 30 05:02:47.824663 containerd[1465]: time="2025-01-30T05:02:47.824566278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h7qq4,Uid:a558c77b-1eec-4538-a8e3-45caad1c9022,Namespace:kube-system,Attempt:0,} returns sandbox id \"422a95e4bfb4e49467e9f70af38b16158274efb07f80b1076a4b3f3e86ae481a\"" Jan 30 05:02:47.825825 kubelet[2546]: E0130 05:02:47.825792 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:47.831695 containerd[1465]: time="2025-01-30T05:02:47.829845155Z" level=info msg="CreateContainer within sandbox \"422a95e4bfb4e49467e9f70af38b16158274efb07f80b1076a4b3f3e86ae481a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:02:47.866429 containerd[1465]: time="2025-01-30T05:02:47.866343070Z" level=info msg="CreateContainer within sandbox \"422a95e4bfb4e49467e9f70af38b16158274efb07f80b1076a4b3f3e86ae481a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06c687413cfa83bdae4f37984889e50aec326d6fc3cead23292adaae70ea18d8\"" Jan 30 05:02:47.867745 containerd[1465]: time="2025-01-30T05:02:47.867445403Z" level=info msg="StartContainer for \"06c687413cfa83bdae4f37984889e50aec326d6fc3cead23292adaae70ea18d8\"" Jan 30 05:02:47.904700 systemd[1]: Started cri-containerd-06c687413cfa83bdae4f37984889e50aec326d6fc3cead23292adaae70ea18d8.scope - libcontainer container 06c687413cfa83bdae4f37984889e50aec326d6fc3cead23292adaae70ea18d8. 
Jan 30 05:02:47.952632 containerd[1465]: time="2025-01-30T05:02:47.952564796Z" level=info msg="StartContainer for \"06c687413cfa83bdae4f37984889e50aec326d6fc3cead23292adaae70ea18d8\" returns successfully" Jan 30 05:02:48.774049 kubelet[2546]: E0130 05:02:48.770259 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:48.817207 kubelet[2546]: I0130 05:02:48.816263 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h7qq4" podStartSLOduration=23.816236175 podStartE2EDuration="23.816236175s" podCreationTimestamp="2025-01-30 05:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:48.792307117 +0000 UTC m=+38.446592042" watchObservedRunningTime="2025-01-30 05:02:48.816236175 +0000 UTC m=+38.470521106" Jan 30 05:02:49.554585 systemd-networkd[1366]: veth57c4f688: Gained IPv6LL Jan 30 05:02:49.772417 kubelet[2546]: E0130 05:02:49.772324 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:50.715920 systemd[1]: Started sshd@9-137.184.189.202:22-116.105.221.82:60340.service - OpenSSH per-connection server daemon (116.105.221.82:60340). 
Jan 30 05:02:50.775069 kubelet[2546]: E0130 05:02:50.775011 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 05:02:51.894399 sshd[3480]: Invalid user ubnt from 116.105.221.82 port 60340 Jan 30 05:02:52.078353 sshd[3483]: pam_faillock(sshd:auth): User unknown Jan 30 05:02:52.083006 sshd[3480]: Postponed keyboard-interactive for invalid user ubnt from 116.105.221.82 port 60340 ssh2 [preauth] Jan 30 05:02:52.267338 sshd[3483]: pam_unix(sshd:auth): check pass; user unknown Jan 30 05:02:52.267388 sshd[3483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.105.221.82 Jan 30 05:02:52.268109 sshd[3483]: pam_faillock(sshd:auth): User unknown Jan 30 05:02:52.540789 systemd[1]: Started sshd@10-137.184.189.202:22-147.75.109.163:47800.service - OpenSSH per-connection server daemon (147.75.109.163:47800). Jan 30 05:02:52.618962 sshd[3485]: Accepted publickey for core from 147.75.109.163 port 47800 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:52.621363 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:52.633517 systemd-logind[1447]: New session 9 of user core. Jan 30 05:02:52.637574 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 05:02:52.806778 sshd[3485]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:52.813500 systemd[1]: sshd@10-137.184.189.202:22-147.75.109.163:47800.service: Deactivated successfully. Jan 30 05:02:52.817921 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 05:02:52.819260 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 30 05:02:52.821235 systemd-logind[1447]: Removed session 9. 
Jan 30 05:02:53.684720 sshd[3480]: PAM: Permission denied for illegal user ubnt from 116.105.221.82 Jan 30 05:02:53.685979 sshd[3480]: Failed keyboard-interactive/pam for invalid user ubnt from 116.105.221.82 port 60340 ssh2 Jan 30 05:02:53.892612 sshd[3480]: Connection closed by invalid user ubnt 116.105.221.82 port 60340 [preauth] Jan 30 05:02:53.895861 systemd[1]: sshd@9-137.184.189.202:22-116.105.221.82:60340.service: Deactivated successfully. Jan 30 05:02:57.828217 systemd[1]: Started sshd@11-137.184.189.202:22-147.75.109.163:50766.service - OpenSSH per-connection server daemon (147.75.109.163:50766). Jan 30 05:02:57.887797 sshd[3525]: Accepted publickey for core from 147.75.109.163 port 50766 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:57.889824 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:57.898422 systemd-logind[1447]: New session 10 of user core. Jan 30 05:02:57.904692 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 05:02:58.070570 sshd[3525]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:58.084725 systemd[1]: sshd@11-137.184.189.202:22-147.75.109.163:50766.service: Deactivated successfully. Jan 30 05:02:58.088697 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 05:02:58.091853 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 30 05:02:58.096907 systemd[1]: Started sshd@12-137.184.189.202:22-147.75.109.163:50780.service - OpenSSH per-connection server daemon (147.75.109.163:50780). Jan 30 05:02:58.099723 systemd-logind[1447]: Removed session 10. Jan 30 05:02:58.173779 sshd[3539]: Accepted publickey for core from 147.75.109.163 port 50780 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:58.176050 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:58.185403 systemd-logind[1447]: New session 11 of user core. 
Jan 30 05:02:58.191712 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 05:02:58.420106 sshd[3539]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:58.432855 systemd[1]: sshd@12-137.184.189.202:22-147.75.109.163:50780.service: Deactivated successfully. Jan 30 05:02:58.436144 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 05:02:58.441115 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 30 05:02:58.451725 systemd[1]: Started sshd@13-137.184.189.202:22-147.75.109.163:50782.service - OpenSSH per-connection server daemon (147.75.109.163:50782). Jan 30 05:02:58.458377 systemd-logind[1447]: Removed session 11. Jan 30 05:02:58.512384 sshd[3550]: Accepted publickey for core from 147.75.109.163 port 50782 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:58.514890 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:58.523384 systemd-logind[1447]: New session 12 of user core. Jan 30 05:02:58.529792 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 05:02:58.686116 sshd[3550]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:58.692591 systemd[1]: sshd@13-137.184.189.202:22-147.75.109.163:50782.service: Deactivated successfully. Jan 30 05:02:58.695921 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 05:02:58.697847 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 30 05:02:58.699981 systemd-logind[1447]: Removed session 12. Jan 30 05:03:03.704841 systemd[1]: Started sshd@14-137.184.189.202:22-147.75.109.163:50786.service - OpenSSH per-connection server daemon (147.75.109.163:50786). 
Jan 30 05:03:03.760121 sshd[3585]: Accepted publickey for core from 147.75.109.163 port 50786 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:03.762222 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:03.769997 systemd-logind[1447]: New session 13 of user core. Jan 30 05:03:03.774599 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 05:03:03.932516 sshd[3585]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:03.939900 systemd[1]: sshd@14-137.184.189.202:22-147.75.109.163:50786.service: Deactivated successfully. Jan 30 05:03:03.943949 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 05:03:03.945198 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 30 05:03:03.946608 systemd-logind[1447]: Removed session 13. Jan 30 05:03:08.958671 systemd[1]: Started sshd@15-137.184.189.202:22-147.75.109.163:37504.service - OpenSSH per-connection server daemon (147.75.109.163:37504). Jan 30 05:03:09.027161 sshd[3619]: Accepted publickey for core from 147.75.109.163 port 37504 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:09.029617 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:09.037340 systemd-logind[1447]: New session 14 of user core. Jan 30 05:03:09.045671 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 05:03:09.212685 sshd[3619]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:09.219713 systemd[1]: sshd@15-137.184.189.202:22-147.75.109.163:37504.service: Deactivated successfully. Jan 30 05:03:09.222606 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 05:03:09.224075 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Jan 30 05:03:09.226697 systemd-logind[1447]: Removed session 14. 
Jan 30 05:03:14.234828 systemd[1]: Started sshd@16-137.184.189.202:22-147.75.109.163:37508.service - OpenSSH per-connection server daemon (147.75.109.163:37508). Jan 30 05:03:14.282452 sshd[3660]: Accepted publickey for core from 147.75.109.163 port 37508 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:14.284141 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:14.305317 systemd-logind[1447]: New session 15 of user core. Jan 30 05:03:14.313657 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 05:03:14.464733 sshd[3660]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:14.470373 systemd[1]: sshd@16-137.184.189.202:22-147.75.109.163:37508.service: Deactivated successfully. Jan 30 05:03:14.473870 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 05:03:14.475245 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 30 05:03:14.477860 systemd-logind[1447]: Removed session 15. Jan 30 05:03:19.489810 systemd[1]: Started sshd@17-137.184.189.202:22-147.75.109.163:59814.service - OpenSSH per-connection server daemon (147.75.109.163:59814). Jan 30 05:03:19.536602 sshd[3709]: Accepted publickey for core from 147.75.109.163 port 59814 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:19.539199 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:19.548436 systemd-logind[1447]: New session 16 of user core. Jan 30 05:03:19.556616 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 05:03:19.710484 sshd[3709]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:19.725826 systemd[1]: sshd@17-137.184.189.202:22-147.75.109.163:59814.service: Deactivated successfully. Jan 30 05:03:19.730780 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 05:03:19.732419 systemd-logind[1447]: Session 16 logged out. 
Waiting for processes to exit. Jan 30 05:03:19.741902 systemd[1]: Started sshd@18-137.184.189.202:22-147.75.109.163:59830.service - OpenSSH per-connection server daemon (147.75.109.163:59830). Jan 30 05:03:19.744817 systemd-logind[1447]: Removed session 16. Jan 30 05:03:19.802265 sshd[3722]: Accepted publickey for core from 147.75.109.163 port 59830 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:19.804678 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:19.812640 systemd-logind[1447]: New session 17 of user core. Jan 30 05:03:19.820730 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 05:03:20.249241 sshd[3722]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:20.265791 systemd[1]: sshd@18-137.184.189.202:22-147.75.109.163:59830.service: Deactivated successfully. Jan 30 05:03:20.269659 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 05:03:20.272671 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 30 05:03:20.279800 systemd[1]: Started sshd@19-137.184.189.202:22-147.75.109.163:59836.service - OpenSSH per-connection server daemon (147.75.109.163:59836). Jan 30 05:03:20.282598 systemd-logind[1447]: Removed session 17. Jan 30 05:03:20.336493 sshd[3733]: Accepted publickey for core from 147.75.109.163 port 59836 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:20.339249 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:20.346987 systemd-logind[1447]: New session 18 of user core. Jan 30 05:03:20.355656 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 05:03:22.229639 sshd[3733]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:22.245457 systemd[1]: sshd@19-137.184.189.202:22-147.75.109.163:59836.service: Deactivated successfully. 
Jan 30 05:03:22.249232 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 05:03:22.251211 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 30 05:03:22.266483 systemd[1]: Started sshd@20-137.184.189.202:22-147.75.109.163:59852.service - OpenSSH per-connection server daemon (147.75.109.163:59852). Jan 30 05:03:22.270815 systemd-logind[1447]: Removed session 18. Jan 30 05:03:22.326870 sshd[3753]: Accepted publickey for core from 147.75.109.163 port 59852 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:22.328817 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:22.337403 systemd-logind[1447]: New session 19 of user core. Jan 30 05:03:22.346616 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 05:03:22.671593 sshd[3753]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:22.682230 systemd[1]: sshd@20-137.184.189.202:22-147.75.109.163:59852.service: Deactivated successfully. Jan 30 05:03:22.685664 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 05:03:22.687851 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 30 05:03:22.696727 systemd[1]: Started sshd@21-137.184.189.202:22-147.75.109.163:59866.service - OpenSSH per-connection server daemon (147.75.109.163:59866). Jan 30 05:03:22.699971 systemd-logind[1447]: Removed session 19. Jan 30 05:03:22.744553 sshd[3765]: Accepted publickey for core from 147.75.109.163 port 59866 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:22.747228 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:22.756406 systemd-logind[1447]: New session 20 of user core. Jan 30 05:03:22.766111 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 05:03:22.918976 sshd[3765]: pam_unix(sshd:session): session closed for user core
Jan 30 05:03:22.924790 systemd[1]: sshd@21-137.184.189.202:22-147.75.109.163:59866.service: Deactivated successfully.
Jan 30 05:03:22.928899 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 05:03:22.930580 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Jan 30 05:03:22.932286 systemd-logind[1447]: Removed session 20.
Jan 30 05:03:27.940767 systemd[1]: Started sshd@22-137.184.189.202:22-147.75.109.163:45036.service - OpenSSH per-connection server daemon (147.75.109.163:45036).
Jan 30 05:03:27.991890 sshd[3800]: Accepted publickey for core from 147.75.109.163 port 45036 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:03:27.993899 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:03:28.001321 systemd-logind[1447]: New session 21 of user core.
Jan 30 05:03:28.007622 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 05:03:28.169371 sshd[3800]: pam_unix(sshd:session): session closed for user core
Jan 30 05:03:28.173836 systemd[1]: sshd@22-137.184.189.202:22-147.75.109.163:45036.service: Deactivated successfully.
Jan 30 05:03:28.176279 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 05:03:28.177632 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Jan 30 05:03:28.180867 systemd-logind[1447]: Removed session 21.
Jan 30 05:03:28.619226 kubelet[2546]: E0130 05:03:28.617921    2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:03:28.620159 kubelet[2546]: E0130 05:03:28.619955    2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:03:33.188790 systemd[1]: Started sshd@23-137.184.189.202:22-147.75.109.163:45040.service - OpenSSH per-connection server daemon (147.75.109.163:45040).
Jan 30 05:03:33.255664 sshd[3837]: Accepted publickey for core from 147.75.109.163 port 45040 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:03:33.257721 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:03:33.266789 systemd-logind[1447]: New session 22 of user core.
Jan 30 05:03:33.279618 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 05:03:33.436674 sshd[3837]: pam_unix(sshd:session): session closed for user core
Jan 30 05:03:33.442174 systemd[1]: sshd@23-137.184.189.202:22-147.75.109.163:45040.service: Deactivated successfully.
Jan 30 05:03:33.445869 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 05:03:33.447616 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Jan 30 05:03:33.449156 systemd-logind[1447]: Removed session 22.
Jan 30 05:03:37.617804 kubelet[2546]: E0130 05:03:37.617684    2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:03:38.460810 systemd[1]: Started sshd@24-137.184.189.202:22-147.75.109.163:40764.service - OpenSSH per-connection server daemon (147.75.109.163:40764).
Jan 30 05:03:38.514749 sshd[3871]: Accepted publickey for core from 147.75.109.163 port 40764 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:03:38.517276 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:03:38.526452 systemd-logind[1447]: New session 23 of user core.
Jan 30 05:03:38.535647 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 05:03:38.694651 sshd[3871]: pam_unix(sshd:session): session closed for user core
Jan 30 05:03:38.699235 systemd[1]: sshd@24-137.184.189.202:22-147.75.109.163:40764.service: Deactivated successfully.
Jan 30 05:03:38.702535 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 05:03:38.705180 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Jan 30 05:03:38.707384 systemd-logind[1447]: Removed session 23.
Jan 30 05:03:39.618258 kubelet[2546]: E0130 05:03:39.617759    2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 05:03:43.717798 systemd[1]: Started sshd@25-137.184.189.202:22-147.75.109.163:40766.service - OpenSSH per-connection server daemon (147.75.109.163:40766).
Jan 30 05:03:43.766857 sshd[3905]: Accepted publickey for core from 147.75.109.163 port 40766 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:03:43.769191 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:03:43.775550 systemd-logind[1447]: New session 24 of user core.
Jan 30 05:03:43.780517 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 05:03:43.933527 sshd[3905]: pam_unix(sshd:session): session closed for user core
Jan 30 05:03:43.938938 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Jan 30 05:03:43.940680 systemd[1]: sshd@25-137.184.189.202:22-147.75.109.163:40766.service: Deactivated successfully.
Jan 30 05:03:43.944496 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 05:03:43.946332 systemd-logind[1447]: Removed session 24.