Jan 17 12:17:05.344715 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:17:05.344782 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:05.344809 kernel: BIOS-provided physical RAM map:
Jan 17 12:17:05.344822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:17:05.344834 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:17:05.344847 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:17:05.344860 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 12:17:05.344871 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 12:17:05.344882 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:17:05.344896 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:17:05.344908 kernel: NX (Execute Disable) protection: active
Jan 17 12:17:05.344920 kernel: APIC: Static calls initialized
Jan 17 12:17:05.344938 kernel: SMBIOS 2.8 present.
Jan 17 12:17:05.344948 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 12:17:05.344961 kernel: Hypervisor detected: KVM
Jan 17 12:17:05.344976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:17:05.345023 kernel: kvm-clock: using sched offset of 4045602280 cycles
Jan 17 12:17:05.345037 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:17:05.345048 kernel: tsc: Detected 2000.000 MHz processor
Jan 17 12:17:05.345060 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:17:05.345072 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:17:05.345083 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 12:17:05.345098 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:17:05.345109 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:17:05.345124 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:17:05.345135 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 12:17:05.345147 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345158 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345169 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345179 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 12:17:05.345200 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345212 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345223 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345238 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:05.345251 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 17 12:17:05.345265 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 17 12:17:05.345276 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 12:17:05.345288 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 17 12:17:05.345298 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 17 12:17:05.345316 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 17 12:17:05.345348 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 17 12:17:05.345361 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:17:05.345372 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:17:05.345384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:17:05.345399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:17:05.345422 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 12:17:05.345437 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 12:17:05.345515 kernel: Zone ranges:
Jan 17 12:17:05.345531 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:17:05.345546 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 12:17:05.345561 kernel: Normal empty
Jan 17 12:17:05.345577 kernel: Movable zone start for each node
Jan 17 12:17:05.345592 kernel: Early memory node ranges
Jan 17 12:17:05.345607 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:17:05.345623 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 12:17:05.345638 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 12:17:05.345661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:17:05.345675 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:17:05.345696 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 12:17:05.345711 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:17:05.345726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:17:05.345740 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:17:05.345755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:17:05.345770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:17:05.345785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:17:05.345808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:17:05.345824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:17:05.345838 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:17:05.345854 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:17:05.345869 kernel: TSC deadline timer available
Jan 17 12:17:05.345885 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:17:05.345900 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:17:05.345915 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 12:17:05.345938 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:17:05.345953 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:17:05.345975 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:17:05.345990 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:17:05.346005 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:17:05.346020 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:17:05.346036 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:17:05.346051 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:05.346064 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:17:05.346081 kernel: random: crng init done
Jan 17 12:17:05.346094 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:17:05.346106 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:17:05.346118 kernel: Fallback order for Node 0: 0
Jan 17 12:17:05.346132 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 12:17:05.346145 kernel: Policy zone: DMA32
Jan 17 12:17:05.346158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:17:05.346173 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved)
Jan 17 12:17:05.346186 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:17:05.346203 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:17:05.346216 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:17:05.346229 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:17:05.346242 kernel: Dynamic Preempt: voluntary
Jan 17 12:17:05.346256 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:17:05.346271 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:17:05.346285 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:17:05.346298 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:17:05.346311 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:17:05.346328 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:17:05.346342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:17:05.346355 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:17:05.346368 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:17:05.346382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:17:05.346400 kernel: Console: colour VGA+ 80x25
Jan 17 12:17:05.346414 kernel: printk: console [tty0] enabled
Jan 17 12:17:05.346428 kernel: printk: console [ttyS0] enabled
Jan 17 12:17:05.346441 kernel: ACPI: Core revision 20230628
Jan 17 12:17:05.346468 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:17:05.346489 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:17:05.346502 kernel: x2apic enabled
Jan 17 12:17:05.346515 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:17:05.346529 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:17:05.346542 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 12:17:05.346555 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 17 12:17:05.346569 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:17:05.346582 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:17:05.346610 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:17:05.346624 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:17:05.346639 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:17:05.346656 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:17:05.346670 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 12:17:05.346685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:17:05.346699 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:17:05.346712 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:17:05.346727 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:17:05.346749 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:17:05.346763 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:17:05.346777 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:17:05.346791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:17:05.346806 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:17:05.346820 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:17:05.346834 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:17:05.346848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:17:05.346879 kernel: landlock: Up and running.
Jan 17 12:17:05.346893 kernel: SELinux: Initializing.
Jan 17 12:17:05.346908 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:05.346922 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:05.346937 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 12:17:05.346951 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:05.346965 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:05.346980 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:05.346994 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 12:17:05.347011 kernel: signal: max sigframe size: 1776
Jan 17 12:17:05.347025 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:17:05.347039 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:17:05.347053 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:17:05.347067 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:17:05.347081 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:17:05.347096 kernel: .... node #0, CPUs: #1
Jan 17 12:17:05.347109 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:17:05.347128 kernel: smpboot: Max logical packages: 1
Jan 17 12:17:05.347145 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 17 12:17:05.347159 kernel: devtmpfs: initialized
Jan 17 12:17:05.347173 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:17:05.347187 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:17:05.347201 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:17:05.347215 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:17:05.347230 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:17:05.347244 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:17:05.347258 kernel: audit: type=2000 audit(1737116223.126:1): state=initialized audit_enabled=0 res=1
Jan 17 12:17:05.347275 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:17:05.347289 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:17:05.347303 kernel: cpuidle: using governor menu
Jan 17 12:17:05.347317 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:17:05.347332 kernel: dca service started, version 1.12.1
Jan 17 12:17:05.347346 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:17:05.347360 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:17:05.347374 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:17:05.347388 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:17:05.347405 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:17:05.347419 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:17:05.347434 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:17:05.347448 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:17:05.347571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:17:05.347586 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:17:05.347599 kernel: ACPI: Interpreter enabled
Jan 17 12:17:05.347611 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:17:05.347624 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:17:05.347642 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:17:05.347655 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:17:05.348022 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:17:05.348045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:17:05.348434 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:17:05.352940 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:17:05.353122 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:17:05.353153 kernel: acpiphp: Slot [3] registered
Jan 17 12:17:05.353169 kernel: acpiphp: Slot [4] registered
Jan 17 12:17:05.353185 kernel: acpiphp: Slot [5] registered
Jan 17 12:17:05.353201 kernel: acpiphp: Slot [6] registered
Jan 17 12:17:05.353217 kernel: acpiphp: Slot [7] registered
Jan 17 12:17:05.353233 kernel: acpiphp: Slot [8] registered
Jan 17 12:17:05.353248 kernel: acpiphp: Slot [9] registered
Jan 17 12:17:05.353264 kernel: acpiphp: Slot [10] registered
Jan 17 12:17:05.353279 kernel: acpiphp: Slot [11] registered
Jan 17 12:17:05.353294 kernel: acpiphp: Slot [12] registered
Jan 17 12:17:05.353314 kernel: acpiphp: Slot [13] registered
Jan 17 12:17:05.353329 kernel: acpiphp: Slot [14] registered
Jan 17 12:17:05.353344 kernel: acpiphp: Slot [15] registered
Jan 17 12:17:05.353359 kernel: acpiphp: Slot [16] registered
Jan 17 12:17:05.353374 kernel: acpiphp: Slot [17] registered
Jan 17 12:17:05.353389 kernel: acpiphp: Slot [18] registered
Jan 17 12:17:05.353404 kernel: acpiphp: Slot [19] registered
Jan 17 12:17:05.353420 kernel: acpiphp: Slot [20] registered
Jan 17 12:17:05.353435 kernel: acpiphp: Slot [21] registered
Jan 17 12:17:05.353450 kernel: acpiphp: Slot [22] registered
Jan 17 12:17:05.353480 kernel: acpiphp: Slot [23] registered
Jan 17 12:17:05.353504 kernel: acpiphp: Slot [24] registered
Jan 17 12:17:05.353524 kernel: acpiphp: Slot [25] registered
Jan 17 12:17:05.353539 kernel: acpiphp: Slot [26] registered
Jan 17 12:17:05.353553 kernel: acpiphp: Slot [27] registered
Jan 17 12:17:05.353569 kernel: acpiphp: Slot [28] registered
Jan 17 12:17:05.353585 kernel: acpiphp: Slot [29] registered
Jan 17 12:17:05.353601 kernel: acpiphp: Slot [30] registered
Jan 17 12:17:05.353616 kernel: acpiphp: Slot [31] registered
Jan 17 12:17:05.353631 kernel: PCI host bridge to bus 0000:00
Jan 17 12:17:05.353857 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:17:05.354033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:17:05.354314 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:05.356489 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:05.356805 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:05.356952 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:17:05.357152 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:17:05.357332 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:17:05.357539 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:17:05.357697 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 12:17:05.357849 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:17:05.357999 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:17:05.358149 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:17:05.358299 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:17:05.360585 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 12:17:05.360815 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 12:17:05.360990 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:17:05.361135 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:17:05.361280 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:17:05.361446 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:17:05.361620 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:17:05.361774 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 12:17:05.361947 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 12:17:05.362093 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:17:05.362239 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:17:05.362432 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:05.364714 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 12:17:05.364870 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 12:17:05.365017 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 12:17:05.365169 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:05.365308 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 12:17:05.365451 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 12:17:05.365609 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 12:17:05.365780 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 12:17:05.365921 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 12:17:05.366108 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 12:17:05.366248 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 12:17:05.366423 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:05.368714 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:17:05.368885 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 12:17:05.369021 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 12:17:05.369180 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:05.370179 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 12:17:05.370369 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 12:17:05.370517 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 12:17:05.370660 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:17:05.370791 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 12:17:05.370933 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 12:17:05.370964 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:17:05.370978 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:17:05.370991 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:17:05.371003 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:17:05.371016 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:17:05.371029 kernel: iommu: Default domain type: Translated
Jan 17 12:17:05.371042 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:17:05.371055 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:17:05.371068 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:17:05.371084 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:17:05.371097 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 12:17:05.371232 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:17:05.371361 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:17:05.373545 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:17:05.373562 kernel: vgaarb: loaded
Jan 17 12:17:05.373576 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:17:05.373589 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:17:05.373601 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:17:05.373620 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:17:05.373634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:17:05.373646 kernel: pnp: PnP ACPI init
Jan 17 12:17:05.373659 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 12:17:05.373672 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:17:05.373685 kernel: NET: Registered PF_INET protocol family
Jan 17 12:17:05.373698 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:17:05.373711 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:17:05.373728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:17:05.373740 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:17:05.373753 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:17:05.373766 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:17:05.373779 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:05.373791 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:05.373804 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:17:05.373817 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:17:05.373957 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:17:05.374080 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:17:05.374223 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:05.374348 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:05.374493 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:05.374649 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:17:05.374803 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:17:05.374825 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:17:05.375674 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 49410 usecs
Jan 17 12:17:05.375702 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:17:05.375716 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:17:05.375730 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 12:17:05.375744 kernel: Initialise system trusted keyrings
Jan 17 12:17:05.375758 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:17:05.375771 kernel: Key type asymmetric registered
Jan 17 12:17:05.375784 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:17:05.375797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:17:05.375811 kernel: io scheduler mq-deadline registered
Jan 17 12:17:05.375828 kernel: io scheduler kyber registered
Jan 17 12:17:05.375841 kernel: io scheduler bfq registered
Jan 17 12:17:05.375854 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:17:05.375868 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:17:05.375882 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:17:05.375895 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:17:05.375908 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:17:05.375922 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:17:05.375935 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:17:05.375951 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:17:05.375964 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:17:05.376152 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:17:05.376268 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:17:05.376381 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:17:04 UTC (1737116224)
Jan 17 12:17:05.377535 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:17:05.377551 kernel: intel_pstate: CPU model not supported
Jan 17 12:17:05.377564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:17:05.377582 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:17:05.377596 kernel: Segment Routing with IPv6
Jan 17 12:17:05.377608 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:17:05.377621 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:17:05.377635 kernel: Key type dns_resolver registered
Jan 17 12:17:05.377647 kernel: IPI shorthand broadcast: enabled
Jan 17 12:17:05.377659 kernel: sched_clock: Marking stable (1738006270, 205883496)->(2085342880, -141453114)
Jan 17 12:17:05.377672 kernel: registered taskstats version 1
Jan 17 12:17:05.377685 kernel: Loading compiled-in X.509 certificates
Jan 17 12:17:05.377701 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:17:05.377713 kernel: Key type .fscrypt registered
Jan 17 12:17:05.377726 kernel: Key type fscrypt-provisioning registered
Jan 17 12:17:05.377739 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:17:05.377751 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:17:05.377763 kernel: ima: No architecture policies found
Jan 17 12:17:05.377776 kernel: clk: Disabling unused clocks
Jan 17 12:17:05.377788 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:17:05.377804 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:17:05.377838 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:17:05.377855 kernel: Run /init as init process
Jan 17 12:17:05.377868 kernel: with arguments:
Jan 17 12:17:05.377882 kernel: /init
Jan 17 12:17:05.377894 kernel: with environment:
Jan 17 12:17:05.377908 kernel: HOME=/
Jan 17 12:17:05.377937 kernel: TERM=linux
Jan 17 12:17:05.377950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:17:05.377967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:17:05.377988 systemd[1]: Detected virtualization kvm.
Jan 17 12:17:05.378002 systemd[1]: Detected architecture x86-64.
Jan 17 12:17:05.378016 systemd[1]: Running in initrd.
Jan 17 12:17:05.378029 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:17:05.378042 systemd[1]: Hostname set to <localhost>.
Jan 17 12:17:05.378056 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:17:05.378073 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:17:05.378087 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:05.378101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:05.378115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:17:05.378129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:17:05.378143 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:17:05.378157 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:17:05.378174 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:17:05.378191 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:17:05.378209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:05.378223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:05.378237 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:05.378250 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:17:05.378265 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:17:05.378282 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:05.378295 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:17:05.378436 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:17:05.380495 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:17:05.380516 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:17:05.380531 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:05.380546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:05.380567 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:05.380582 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:05.380596 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:17:05.380610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:17:05.380624 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:17:05.380638 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:17:05.380652 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:17:05.380666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:17:05.380680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:05.380698 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:17:05.380712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:05.380726 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:17:05.380742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:17:05.380810 systemd-journald[182]: Collecting audit messages is disabled.
Jan 17 12:17:05.380843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:17:05.380859 systemd-journald[182]: Journal started
Jan 17 12:17:05.380892 systemd-journald[182]: Runtime Journal (/run/log/journal/89b9f1a62a2d47c680164bda6cc1036b) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:17:05.347530 systemd-modules-load[183]: Inserted module 'overlay'
Jan 17 12:17:05.430971 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:17:05.431024 kernel: Bridge firewalling registered
Jan 17 12:17:05.397045 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jan 17 12:17:05.434719 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:17:05.443294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:05.444586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:05.461605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:05.468855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:05.484709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:17:05.507887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:05.511093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:05.516216 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:05.518666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:05.530961 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:17:05.532995 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:05.551789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:05.567196 dracut-cmdline[216]: dracut-dracut-053
Jan 17 12:17:05.581802 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:05.613945 systemd-resolved[218]: Positive Trust Anchors:
Jan 17 12:17:05.613970 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:05.614016 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:05.624863 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 17 12:17:05.628010 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:05.629439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:05.767665 kernel: SCSI subsystem initialized
Jan 17 12:17:05.787093 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:17:05.803527 kernel: iscsi: registered transport (tcp)
Jan 17 12:17:05.834720 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:17:05.834807 kernel: QLogic iSCSI HBA Driver
Jan 17 12:17:05.925802 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:17:05.935259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:17:05.980104 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:17:05.980204 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:17:05.983662 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:17:06.058521 kernel: raid6: avx2x4 gen() 14711 MB/s
Jan 17 12:17:06.076737 kernel: raid6: avx2x2 gen() 16280 MB/s
Jan 17 12:17:06.100748 kernel: raid6: avx2x1 gen() 12432 MB/s
Jan 17 12:17:06.101999 kernel: raid6: using algorithm avx2x2 gen() 16280 MB/s
Jan 17 12:17:06.127351 kernel: raid6: .... xor() 8319 MB/s, rmw enabled
Jan 17 12:17:06.127466 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:17:06.197498 kernel: xor: automatically using best checksumming function   avx
Jan 17 12:17:06.484530 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:17:06.528388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:17:06.547014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:06.575534 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 17 12:17:06.636847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:06.674714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:17:06.713686 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 17 12:17:06.817169 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:17:06.844549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:17:07.007037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:07.052017 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:17:07.070560 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:17:07.096258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:17:07.097205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:07.099964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:17:07.108945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:17:07.150891 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:17:07.185499 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 12:17:07.421004 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 12:17:07.421241 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:17:07.421442 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:17:07.422624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:17:07.422665 kernel: GPT:9289727 != 125829119
Jan 17 12:17:07.422684 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:17:07.422702 kernel: GPT:9289727 != 125829119
Jan 17 12:17:07.422730 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:17:07.422748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:07.422766 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:17:07.422784 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:17:07.422801 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 12:17:07.430095 kernel: virtio_blk virtio5: [vdb] 952 512-byte logical blocks (487 kB/476 KiB)
Jan 17 12:17:07.315753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:17:07.315969 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:07.317352 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:07.318524 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:07.318770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:07.320055 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:07.350077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:07.480979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:07.491763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:07.559664 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:07.593703 kernel: ACPI: bus type USB registered
Jan 17 12:17:07.593793 kernel: usbcore: registered new interface driver usbfs
Jan 17 12:17:07.593813 kernel: usbcore: registered new interface driver hub
Jan 17 12:17:07.603737 kernel: usbcore: registered new device driver usb
Jan 17 12:17:07.621505 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Jan 17 12:17:07.631517 kernel: libata version 3.00 loaded.
Jan 17 12:17:07.669556 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (456)
Jan 17 12:17:07.669675 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:17:07.699486 kernel: scsi host1: ata_piix
Jan 17 12:17:07.700447 kernel: scsi host2: ata_piix
Jan 17 12:17:07.700722 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 12:17:07.700742 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 12:17:07.722818 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:17:07.731154 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 12:17:07.741590 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 12:17:07.741846 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 12:17:07.742056 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 12:17:07.742241 kernel: hub 1-0:1.0: USB hub found
Jan 17 12:17:07.742534 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 12:17:07.746980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:17:07.770436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:17:07.781055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:17:07.782130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:17:07.798907 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:17:07.814995 disk-uuid[554]: Primary Header is updated.
Jan 17 12:17:07.814995 disk-uuid[554]: Secondary Entries is updated.
Jan 17 12:17:07.814995 disk-uuid[554]: Secondary Header is updated.
Jan 17 12:17:07.836624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:07.855534 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:08.870637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:08.872516 disk-uuid[555]: The operation has completed successfully.
Jan 17 12:17:09.032925 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:17:09.034318 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:17:09.060804 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:17:09.073398 sh[566]: Success
Jan 17 12:17:09.132568 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:17:09.267949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:17:09.275676 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:17:09.307341 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:17:09.334058 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:17:09.334183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:09.337604 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:17:09.340971 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:17:09.341062 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:17:09.365716 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:17:09.367526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:17:09.373936 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:17:09.379773 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:17:09.438728 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:09.438834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:09.441528 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:09.471394 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:09.511779 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:17:09.529298 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:09.573960 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:17:09.592125 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:17:09.835711 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:17:09.873591 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:09.907234 systemd-networkd[751]: lo: Link UP
Jan 17 12:17:09.907270 systemd-networkd[751]: lo: Gained carrier
Jan 17 12:17:09.914423 systemd-networkd[751]: Enumeration completed
Jan 17 12:17:09.914662 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:09.915778 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:09.915784 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 12:17:09.917626 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:09.917631 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:17:09.955094 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:09.956651 systemd-networkd[751]: eth0: Link UP
Jan 17 12:17:09.956657 systemd-networkd[751]: eth0: Gained carrier
Jan 17 12:17:09.956676 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:09.962415 systemd-networkd[751]: eth1: Link UP
Jan 17 12:17:09.962422 systemd-networkd[751]: eth1: Gained carrier
Jan 17 12:17:09.962443 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:09.987610 systemd-networkd[751]: eth0: DHCPv4 address 143.198.98.155/20, gateway 143.198.96.1 acquired from 169.254.169.253
Jan 17 12:17:10.029755 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253
Jan 17 12:17:10.033593 ignition[678]: Ignition 2.19.0
Jan 17 12:17:10.033605 ignition[678]: Stage: fetch-offline
Jan 17 12:17:10.033661 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:10.033676 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:10.034108 ignition[678]: parsed url from cmdline: ""
Jan 17 12:17:10.034116 ignition[678]: no config URL provided
Jan 17 12:17:10.034126 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:10.034141 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:10.034150 ignition[678]: failed to fetch config: resource requires networking
Jan 17 12:17:10.040360 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:17:10.034448 ignition[678]: Ignition finished successfully
Jan 17 12:17:10.067226 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:17:10.098816 ignition[761]: Ignition 2.19.0
Jan 17 12:17:10.098833 ignition[761]: Stage: fetch
Jan 17 12:17:10.099279 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:10.099300 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:10.099558 ignition[761]: parsed url from cmdline: ""
Jan 17 12:17:10.099565 ignition[761]: no config URL provided
Jan 17 12:17:10.099574 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:10.099589 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:10.099636 ignition[761]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 12:17:10.169326 ignition[761]: GET result: OK
Jan 17 12:17:10.169678 ignition[761]: parsing config with SHA512: 1aaf0eb5a38bb75375638b6b6e2589226c43a024ed6bdefa447e6d4dbf5cc16258166daad33c2395d9b96fad591d4be736c6631b20b64a3fef3fcb1d2b5d6193
Jan 17 12:17:10.177557 unknown[761]: fetched base config from "system"
Jan 17 12:17:10.178435 unknown[761]: fetched base config from "system"
Jan 17 12:17:10.179392 ignition[761]: fetch: fetch complete
Jan 17 12:17:10.178445 unknown[761]: fetched user config from "digitalocean"
Jan 17 12:17:10.179399 ignition[761]: fetch: fetch passed
Jan 17 12:17:10.186383 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:17:10.179546 ignition[761]: Ignition finished successfully
Jan 17 12:17:10.205371 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:17:10.248267 ignition[768]: Ignition 2.19.0
Jan 17 12:17:10.249999 ignition[768]: Stage: kargs
Jan 17 12:17:10.250409 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:10.250428 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:10.252322 ignition[768]: kargs: kargs passed
Jan 17 12:17:10.252404 ignition[768]: Ignition finished successfully
Jan 17 12:17:10.255233 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:17:10.283382 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:17:10.334993 ignition[774]: Ignition 2.19.0
Jan 17 12:17:10.335008 ignition[774]: Stage: disks
Jan 17 12:17:10.335672 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:10.335696 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:10.337519 ignition[774]: disks: disks passed
Jan 17 12:17:10.342176 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:17:10.337615 ignition[774]: Ignition finished successfully
Jan 17 12:17:10.365063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:17:10.365781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:17:10.366448 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:17:10.367046 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:10.368015 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:10.388866 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:17:10.409358 systemd-fsck[783]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:17:10.420421 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:17:10.437133 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:17:10.676744 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:17:10.682746 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:17:10.684743 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:17:10.705140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:17:10.712667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:17:10.738693 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 12:17:10.752218 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:17:10.754362 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:17:10.756013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:17:10.759676 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:17:10.781268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:17:10.850526 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (791)
Jan 17 12:17:10.867300 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:10.867394 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:10.867415 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:10.899503 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:10.904869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:17:10.976936 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:17:10.993585 coreos-metadata[794]: Jan 17 12:17:10.988 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:11.006526 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:17:11.011038 coreos-metadata[793]: Jan 17 12:17:11.010 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:11.012823 coreos-metadata[794]: Jan 17 12:17:11.011 INFO Fetch successful
Jan 17 12:17:11.028125 coreos-metadata[793]: Jan 17 12:17:11.025 INFO Fetch successful
Jan 17 12:17:11.028004 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:17:11.031406 coreos-metadata[794]: Jan 17 12:17:11.026 INFO wrote hostname ci-4081.3.0-a-89c7b8b189 to /sysroot/etc/hostname
Jan 17 12:17:11.034647 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:17:11.042315 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:17:11.056394 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 12:17:11.056625 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 12:17:11.232335 systemd-networkd[751]: eth1: Gained IPv6LL
Jan 17 12:17:11.243790 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:17:11.261610 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:17:11.266345 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:17:11.297094 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:17:11.303508 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:11.383429 ignition[912]: INFO : Ignition 2.19.0
Jan 17 12:17:11.383429 ignition[912]: INFO : Stage: mount
Jan 17 12:17:11.385591 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:11.385591 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:11.396881 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:17:11.403696 ignition[912]: INFO : mount: mount passed
Jan 17 12:17:11.403696 ignition[912]: INFO : Ignition finished successfully
Jan 17 12:17:11.428223 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:17:11.444616 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:17:11.699727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:17:11.742919 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Jan 17 12:17:11.743612 systemd-networkd[751]: eth0: Gained IPv6LL
Jan 17 12:17:11.751119 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:11.751263 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:11.752916 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:11.757497 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:11.760833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:17:11.810509 ignition[939]: INFO : Ignition 2.19.0
Jan 17 12:17:11.812896 ignition[939]: INFO : Stage: files
Jan 17 12:17:11.815092 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:11.818484 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:11.820515 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:17:11.834189 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:17:11.835499 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:17:11.851449 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:17:11.853147 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:17:11.856942 unknown[939]: wrote ssh authorized keys file for user: core
Jan 17 12:17:11.858157 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:17:11.874755 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:17:11.877801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:17:11.877801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:17:11.877801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:17:12.073550 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:17:12.194928 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:17:12.194928 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:17:12.197801 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 17 12:17:12.719529 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 12:17:13.374111 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:17:13.374111 ignition[939]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:17:13.379839 ignition[939]: INFO : files: files passed
Jan 17 12:17:13.379839 ignition[939]: INFO : Ignition finished successfully
Jan 17 12:17:13.394147 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:17:13.426584 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:17:13.431530 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:17:13.440602 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:17:13.441521 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:17:13.488616 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:13.488616 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:13.495286 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:13.498029 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:17:13.499761 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:17:13.517091 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:17:13.585357 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:17:13.586805 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:17:13.594073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:17:13.611159 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:17:13.612039 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:17:13.623945 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:17:13.674270 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:17:13.686279 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:17:13.704002 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:13.706670 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:13.707964 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:17:13.710336 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:17:13.710654 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:17:13.719555 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:17:13.720817 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:17:13.724248 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:17:13.727289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:17:13.730293 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:17:13.733067 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:17:13.734395 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:17:13.736771 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:17:13.740579 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:17:13.742554 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:17:13.743584 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:17:13.743815 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:17:13.745624 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:13.746705 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:13.748287 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:17:13.750309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:13.752738 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:17:13.762892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:17:13.767311 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:17:13.767663 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:17:13.768718 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:17:13.768913 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:17:13.770044 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:17:13.770296 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:17:13.789290 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:17:13.790053 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:17:13.790375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:13.794954 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:17:13.796118 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:17:13.798213 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:13.803922 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:17:13.804258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:17:13.817481 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:17:13.818446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:17:13.854501 ignition[993]: INFO : Ignition 2.19.0
Jan 17 12:17:13.854501 ignition[993]: INFO : Stage: umount
Jan 17 12:17:13.854501 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:13.854501 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:13.859821 ignition[993]: INFO : umount: umount passed
Jan 17 12:17:13.859821 ignition[993]: INFO : Ignition finished successfully
Jan 17 12:17:13.866306 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:17:13.872764 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:17:13.873042 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:17:13.874292 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:17:13.874378 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:17:13.875405 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:17:13.875550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:17:13.877795 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:17:13.877905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:17:13.878665 systemd[1]: Stopped target network.target - Network.
Jan 17 12:17:13.879338 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:17:13.879509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:17:13.880402 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:17:13.881015 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:17:13.884216 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:13.885446 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:17:13.886810 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:17:13.889263 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:17:13.889343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:17:13.890078 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:17:13.890138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:17:13.890865 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:17:13.890962 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:17:13.891795 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:17:13.891877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:17:13.895087 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:17:13.900244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:13.901593 systemd-networkd[751]: eth1: DHCPv6 lease lost
Jan 17 12:17:13.903106 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:17:13.903352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:17:13.907247 systemd-networkd[751]: eth0: DHCPv6 lease lost
Jan 17 12:17:13.913276 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:17:13.913922 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:17:13.922366 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:17:13.922610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:13.927065 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:17:13.927377 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:17:13.934434 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:17:13.935083 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:13.947032 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:17:13.952149 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:17:13.952281 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:17:13.953202 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:17:13.953281 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:13.955737 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:17:13.955849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:13.956654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:17:13.956725 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:13.964762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:13.999190 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:17:13.999551 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:14.004136 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:17:14.004259 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:17:14.008635 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:17:14.008752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:14.011111 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:17:14.011216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:14.012638 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:17:14.012759 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:17:14.014831 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:17:14.015072 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:17:14.017026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:17:14.017175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:14.027045 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:17:14.027994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:17:14.028131 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:14.031887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:14.032049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:14.077891 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:17:14.080116 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:17:14.085407 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:17:14.098958 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:17:14.134918 systemd[1]: Switching root.
Jan 17 12:17:14.206110 systemd-journald[182]: Journal stopped
Jan 17 12:17:16.101673 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:17:16.101786 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:17:16.101809 kernel: SELinux: policy capability open_perms=1
Jan 17 12:17:16.101823 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:17:16.101835 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:17:16.101847 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:17:16.101858 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:17:16.101879 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:17:16.101890 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:17:16.101901 systemd[1]: Successfully loaded SELinux policy in 58.631ms.
Jan 17 12:17:16.101926 kernel: audit: type=1403 audit(1737116234.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:17:16.101938 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.665ms.
Jan 17 12:17:16.101953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:17:16.101968 systemd[1]: Detected virtualization kvm.
Jan 17 12:17:16.101980 systemd[1]: Detected architecture x86-64.
Jan 17 12:17:16.101994 systemd[1]: Detected first boot.
Jan 17 12:17:16.102007 systemd[1]: Hostname set to <ci-4081.3.0-a-89c7b8b189>.
Jan 17 12:17:16.102018 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:17:16.102030 zram_generator::config[1054]: No configuration found.
Jan 17 12:17:16.102043 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:17:16.102055 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:17:16.102067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:17:16.102080 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:17:16.102091 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:17:16.102105 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:17:16.102117 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:17:16.102128 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:17:16.102140 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:17:16.102151 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:17:16.102163 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:17:16.102174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:16.102188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:16.102199 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:17:16.102218 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:17:16.102231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:17:16.102243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:17:16.102255 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:17:16.102266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:16.102277 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:17:16.102289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:16.102303 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:17:16.102315 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:17:16.102326 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:17:16.102337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:17:16.102349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:17:16.102360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:17:16.102371 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:17:16.102382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:16.102396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:16.102408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:16.102419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:17:16.102430 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:17:16.102442 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:17:16.103524 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:17:16.103582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:16.103605 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:17:16.103627 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:17:16.103651 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:17:16.103667 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:17:16.103684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:16.103700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:17:16.103717 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:17:16.103734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:16.103751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:17:16.103770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:16.103808 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:17:16.103825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:16.103841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:17:16.103859 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 12:17:16.103876 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 12:17:16.103893 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:17:16.103910 kernel: fuse: init (API version 7.39)
Jan 17 12:17:16.103927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:17:16.103944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:17:16.103966 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:17:16.103982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:17:16.103999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:16.104018 kernel: loop: module loaded
Jan 17 12:17:16.104093 systemd-journald[1149]: Collecting audit messages is disabled.
Jan 17 12:17:16.104127 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:17:16.104145 systemd-journald[1149]: Journal started
Jan 17 12:17:16.104182 systemd-journald[1149]: Runtime Journal (/run/log/journal/89b9f1a62a2d47c680164bda6cc1036b) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:17:16.112649 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:17:16.117165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:17:16.123845 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:17:16.126900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:17:16.127771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:17:16.128619 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:17:16.129698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:16.131108 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:17:16.131392 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:17:16.133274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:16.133566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:16.134985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:16.135241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:16.143225 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:17:16.143449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:17:16.145693 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:16.145970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:16.147533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:16.148818 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:17:16.151249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:17:16.152827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:17:16.178668 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:17:16.209492 kernel: ACPI: bus type drm_connector registered
Jan 17 12:17:16.209895 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:17:16.218141 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:17:16.220741 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:17:16.230861 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:17:16.243864 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:17:16.247738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:16.261645 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:17:16.262628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:16.272730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:16.286353 systemd-journald[1149]: Time spent on flushing to /var/log/journal/89b9f1a62a2d47c680164bda6cc1036b is 87.467ms for 969 entries.
Jan 17 12:17:16.286353 systemd-journald[1149]: System Journal (/var/log/journal/89b9f1a62a2d47c680164bda6cc1036b) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:17:16.466650 systemd-journald[1149]: Received client request to flush runtime journal.
Jan 17 12:17:16.307970 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:17:16.323519 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:17:16.323816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:17:16.330235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:17:16.331478 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:17:16.337207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:17:16.361229 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:17:16.435237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:16.447435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:16.450117 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 17 12:17:16.450140 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 17 12:17:16.467748 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:17:16.479056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:17:16.483175 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:17:16.509870 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:17:16.517600 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 12:17:16.563407 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:17:16.570831 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:17:16.613376 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jan 17 12:17:16.613881 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jan 17 12:17:16.623285 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:17.311309 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:17:17.321867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:17.371299 systemd-udevd[1225]: Using default interface naming scheme 'v255'.
Jan 17 12:17:17.404628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:17.418915 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:17.450005 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:17:17.540712 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 17 12:17:17.543862 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:17:17.562665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:17.562945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:17.571846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:17.575928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:17.588754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:17.590626 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:17:17.590887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:17:17.591062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:17.592189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:17.594558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:17.615128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:17.615434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:17.616865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:17.617068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:17.625913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:17.626029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:17.711494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1229)
Jan 17 12:17:17.720989 systemd-networkd[1231]: lo: Link UP
Jan 17 12:17:17.721441 systemd-networkd[1231]: lo: Gained carrier
Jan 17 12:17:17.726023 systemd-networkd[1231]: Enumeration completed
Jan 17 12:17:17.726253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:17.727345 systemd-networkd[1231]: eth0: Configuring with /run/systemd/network/10-de:6b:2b:d2:95:4c.network.
Jan 17 12:17:17.731199 systemd-networkd[1231]: eth1: Configuring with /run/systemd/network/10-22:b1:de:50:5a:1b.network.
Jan 17 12:17:17.732203 systemd-networkd[1231]: eth0: Link UP
Jan 17 12:17:17.732377 systemd-networkd[1231]: eth0: Gained carrier
Jan 17 12:17:17.735777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:17:17.736933 systemd-networkd[1231]: eth1: Link UP
Jan 17 12:17:17.736940 systemd-networkd[1231]: eth1: Gained carrier
Jan 17 12:17:17.808534 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 17 12:17:17.839526 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 12:17:17.853501 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:17:17.881550 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 12:17:17.889131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:17:17.918507 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:17:17.935161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:17.989488 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 17 12:17:17.992502 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 17 12:17:18.008537 kernel: Console: switching to colour dummy device 80x25
Jan 17 12:17:18.009859 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 12:17:18.009947 kernel: [drm] features: -context_init
Jan 17 12:17:18.014507 kernel: [drm] number of scanouts: 1
Jan 17 12:17:18.017507 kernel: [drm] number of cap sets: 0
Jan 17 12:17:18.023518 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 17 12:17:18.030506 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 12:17:18.033781 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:17:18.055529 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 12:17:18.068603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:18.068993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:18.080021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:18.088682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:18.089149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:18.094820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:18.195185 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:18.238511 kernel: EDAC MC: Ver: 3.0.0
Jan 17 12:17:18.274164 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:17:18.285797 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:17:18.323432 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:18.362050 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:17:18.363521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:18.368827 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:17:18.393316 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:18.428557 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:17:18.430264 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:17:18.442861 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 17 12:17:18.443177 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:17:18.443246 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:17:18.447723 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:17:18.473924 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 17 12:17:18.481479 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 17 12:17:18.484208 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:17:18.487351 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:17:18.509843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:17:18.516726 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:17:18.520851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:18.529901 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:17:18.544826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:17:18.550599 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:17:18.553866 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:17:18.590513 kernel: loop0: detected capacity change from 0 to 140768
Jan 17 12:17:18.608160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:17:18.609510 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:17:18.650773 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:17:18.688730 kernel: loop1: detected capacity change from 0 to 8
Jan 17 12:17:18.720533 kernel: loop2: detected capacity change from 0 to 142488
Jan 17 12:17:18.780749 systemd-networkd[1231]: eth0: Gained IPv6LL
Jan 17 12:17:18.792759 kernel: loop3: detected capacity change from 0 to 211296
Jan 17 12:17:18.792633 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:17:18.849609 kernel: loop4: detected capacity change from 0 to 140768
Jan 17 12:17:18.902969 kernel: loop5: detected capacity change from 0 to 8
Jan 17 12:17:18.903202 kernel: loop6: detected capacity change from 0 to 142488
Jan 17 12:17:18.947142 kernel: loop7: detected capacity change from 0 to 211296
Jan 17 12:17:18.971630 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 17 12:17:18.979552 (sd-merge)[1322]: Merged extensions into '/usr'.
Jan 17 12:17:18.987540 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:17:18.987572 systemd[1]: Reloading...
Jan 17 12:17:19.163533 zram_generator::config[1349]: No configuration found.
Jan 17 12:17:19.233673 systemd-networkd[1231]: eth1: Gained IPv6LL
Jan 17 12:17:19.512992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:19.613151 systemd[1]: Reloading finished in 624 ms.
Jan 17 12:17:19.631585 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:17:19.637445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:17:19.642344 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:17:19.662916 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:17:19.677228 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:19.701951 systemd[1]: Reloading requested from client PID 1400 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:17:19.702013 systemd[1]: Reloading...
Jan 17 12:17:19.762104 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:17:19.763753 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:17:19.765831 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:17:19.766720 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jan 17 12:17:19.767180 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Jan 17 12:17:19.776130 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:19.776154 systemd-tmpfiles[1401]: Skipping /boot
Jan 17 12:17:19.807251 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:19.807275 systemd-tmpfiles[1401]: Skipping /boot
Jan 17 12:17:19.900954 zram_generator::config[1431]: No configuration found.
Jan 17 12:17:20.206928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:20.308305 systemd[1]: Reloading finished in 605 ms.
Jan 17 12:17:20.333024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:20.364812 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:17:20.377773 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:17:20.384613 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:17:20.400737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:20.430919 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:17:20.458873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.459259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:20.469303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:20.482567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:20.518018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:20.522334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:20.529955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.540128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:20.540528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:20.551588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:20.557781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:20.580329 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:17:20.589895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:20.590252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:20.619709 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:17:20.629106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.631265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:20.639030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:20.657301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:20.689018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:20.690082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:20.695279 augenrules[1518]: No rules
Jan 17 12:17:20.700699 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:17:20.704312 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:17:20.711033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.721569 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:17:20.733753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:20.734030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:20.740387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:20.742272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:20.751572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:20.751934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:20.771802 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:17:20.782816 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:17:20.797995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.798415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:20.806041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:20.819531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:17:20.829513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:20.849800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:20.853136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:20.853244 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:17:20.853277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:20.859603 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:17:20.863279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:20.863579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:20.873305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:20.874819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:20.882095 systemd-resolved[1483]: Positive Trust Anchors:
Jan 17 12:17:20.882110 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:20.882160 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:20.890538 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:20.890857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:20.897520 systemd-resolved[1483]: Using system hostname 'ci-4081.3.0-a-89c7b8b189'.
Jan 17 12:17:20.903122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:20.916074 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:17:20.916375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:17:20.920155 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:20.920933 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:17:20.927237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:20.929101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:20.929222 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:20.953910 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:17:21.049106 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 12:17:21.050114 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:21.055717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:17:21.058280 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:17:21.060151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:17:21.063252 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:17:21.063339 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:21.064238 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:17:21.067645 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:17:21.084559 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:17:21.085398 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:21.089627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:17:21.094261 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:17:21.105682 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:17:21.120514 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:17:21.121669 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:21.125371 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:21.126837 systemd[1]: System is tainted: cgroupsv1
Jan 17 12:17:21.126940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:21.126980 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:21.142198 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:17:21.153000 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:17:21.169809 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:17:21.192746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:17:21.201346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:17:21.205148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:17:21.217552 coreos-metadata[1558]: Jan 17 12:17:21.215 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:21.225033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:17:21.254204 jq[1562]: false
Jan 17 12:17:21.254911 coreos-metadata[1558]: Jan 17 12:17:21.242 INFO Fetch successful
Jan 17 12:17:21.275102 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:17:21.291735 dbus-daemon[1559]: [system] SELinux support is enabled
Jan 17 12:17:21.302903 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:17:21.324645 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 12:17:21.338824 extend-filesystems[1563]: Found loop4
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found loop5
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found loop6
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found loop7
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda1
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda2
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda3
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found usr
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda4
Jan 17 12:17:21.342759 extend-filesystems[1563]: Found vda6
Jan 17 12:17:21.923249 extend-filesystems[1563]: Found vda7
Jan 17 12:17:21.923249 extend-filesystems[1563]: Found vda9
Jan 17 12:17:21.923249 extend-filesystems[1563]: Checking size of /dev/vda9
Jan 17 12:17:21.349616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:17:21.864443 systemd-timesyncd[1552]: Contacted time server 23.141.40.123:123 (0.flatcar.pool.ntp.org).
Jan 17 12:17:21.864567 systemd-timesyncd[1552]: Initial clock synchronization to Fri 2025-01-17 12:17:21.864097 UTC.
Jan 17 12:17:21.865455 systemd-resolved[1483]: Clock change detected. Flushing caches.
Jan 17 12:17:21.881923 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:17:21.940442 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:17:21.944764 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:17:21.959376 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:17:21.984755 extend-filesystems[1563]: Resized partition /dev/vda9
Jan 17 12:17:21.992703 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:17:22.005267 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:17:22.023012 extend-filesystems[1594]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:17:22.051053 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 17 12:17:22.047576 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:17:22.048024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:17:22.066913 jq[1593]: true
Jan 17 12:17:22.066789 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:17:22.070840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:17:22.101662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:17:22.102486 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:17:22.166396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:17:22.178471 update_engine[1587]: I20250117 12:17:22.170197 1587 main.cc:92] Flatcar Update Engine starting
Jan 17 12:17:22.187300 (ntainerd)[1606]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:17:22.212208 update_engine[1587]: I20250117 12:17:22.204324 1587 update_check_scheduler.cc:74] Next update check in 6m5s
Jan 17 12:17:22.232437 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:17:22.232577 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:17:22.234135 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:17:22.234387 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 17 12:17:22.234465 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:17:22.239263 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:17:22.242469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:17:22.258522 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:17:22.271902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:17:22.288551 jq[1604]: true
Jan 17 12:17:22.291568 tar[1602]: linux-amd64/helm
Jan 17 12:17:22.289320 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:17:22.366680 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 12:17:22.483118 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1631)
Jan 17 12:17:22.519704 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 12:17:22.560391 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 12:17:22.560391 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 12:17:22.560391 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 12:17:22.608373 extend-filesystems[1563]: Resized filesystem in /dev/vda9
Jan 17 12:17:22.608373 extend-filesystems[1563]: Found vdb
Jan 17 12:17:22.565555 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:17:22.662502 bash[1653]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:17:22.566016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:17:22.581070 systemd-logind[1585]: New seat seat0.
Jan 17 12:17:22.618495 systemd-logind[1585]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:17:22.618554 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:17:22.652471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:17:22.664758 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:17:22.685522 systemd[1]: Starting sshkeys.service...
Jan 17 12:17:22.796478 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:17:22.814817 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:17:22.887641 coreos-metadata[1664]: Jan 17 12:17:22.871 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:22.894015 coreos-metadata[1664]: Jan 17 12:17:22.891 INFO Fetch successful
Jan 17 12:17:22.950109 unknown[1664]: wrote ssh authorized keys file for user: core
Jan 17 12:17:23.021704 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:17:23.053918 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:17:23.056577 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:17:23.062133 systemd[1]: Finished sshkeys.service.
Jan 17 12:17:23.370227 containerd[1606]: time="2025-01-17T12:17:23.369579803Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:17:23.494012 containerd[1606]: time="2025-01-17T12:17:23.492398751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.501928 sshd_keygen[1600]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:17:23.512516 containerd[1606]: time="2025-01-17T12:17:23.512423940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:23.512516 containerd[1606]: time="2025-01-17T12:17:23.512504375Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:17:23.512670 containerd[1606]: time="2025-01-17T12:17:23.512554643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:17:23.512805 containerd[1606]: time="2025-01-17T12:17:23.512782150Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:17:23.512833 containerd[1606]: time="2025-01-17T12:17:23.512811138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.512924 containerd[1606]: time="2025-01-17T12:17:23.512884751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:23.512946 containerd[1606]: time="2025-01-17T12:17:23.512929441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.518183 containerd[1606]: time="2025-01-17T12:17:23.518106414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:23.518183 containerd[1606]: time="2025-01-17T12:17:23.518173180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.518183 containerd[1606]: time="2025-01-17T12:17:23.518194360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:23.518384 containerd[1606]: time="2025-01-17T12:17:23.518207599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.521001 containerd[1606]: time="2025-01-17T12:17:23.518438079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.521001 containerd[1606]: time="2025-01-17T12:17:23.520493705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:23.521720 containerd[1606]: time="2025-01-17T12:17:23.521676609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:23.521720 containerd[1606]: time="2025-01-17T12:17:23.521717544Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:17:23.522089 containerd[1606]: time="2025-01-17T12:17:23.521885944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:17:23.523623 containerd[1606]: time="2025-01-17T12:17:23.523576074Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:17:23.540679 containerd[1606]: time="2025-01-17T12:17:23.540588223Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:17:23.542986 containerd[1606]: time="2025-01-17T12:17:23.542888103Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:17:23.543123 containerd[1606]: time="2025-01-17T12:17:23.543029203Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:17:23.543162 containerd[1606]: time="2025-01-17T12:17:23.543108153Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:17:23.543162 containerd[1606]: time="2025-01-17T12:17:23.543150723Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:17:23.543464 containerd[1606]: time="2025-01-17T12:17:23.543432039Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:17:23.544150 containerd[1606]: time="2025-01-17T12:17:23.544114599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:17:23.544371 containerd[1606]: time="2025-01-17T12:17:23.544345192Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:17:23.544371 containerd[1606]: time="2025-01-17T12:17:23.544379353Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:17:23.544479 containerd[1606]: time="2025-01-17T12:17:23.544400895Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:17:23.544479 containerd[1606]: time="2025-01-17T12:17:23.544427078Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544517 containerd[1606]: time="2025-01-17T12:17:23.544476798Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544517 containerd[1606]: time="2025-01-17T12:17:23.544504063Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544568 containerd[1606]: time="2025-01-17T12:17:23.544528105Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544568 containerd[1606]: time="2025-01-17T12:17:23.544552879Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544620 containerd[1606]: time="2025-01-17T12:17:23.544575715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544620 containerd[1606]: time="2025-01-17T12:17:23.544595150Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544656 containerd[1606]: time="2025-01-17T12:17:23.544618051Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:17:23.544676 containerd[1606]: time="2025-01-17T12:17:23.544653343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.544696 containerd[1606]: time="2025-01-17T12:17:23.544676404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.545368 containerd[1606]: time="2025-01-17T12:17:23.544743362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.545368 containerd[1606]: time="2025-01-17T12:17:23.544860349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.545368 containerd[1606]: time="2025-01-17T12:17:23.544888187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.545368 containerd[1606]: time="2025-01-17T12:17:23.544912405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.548495 containerd[1606]: time="2025-01-17T12:17:23.544939657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:17:23.548751 containerd[1606]: time="2025-01-17T12:17:23.548586500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..."
type=io.containerd.grpc.v1 Jan 17 12:17:23.548751 containerd[1606]: time="2025-01-17T12:17:23.548645077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.548751 containerd[1606]: time="2025-01-17T12:17:23.548693706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.548751 containerd[1606]: time="2025-01-17T12:17:23.548725686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548751035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548784044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548816180Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548860566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548880135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.548901773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549012100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549048941Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549072323Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549095845Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549113124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549133988Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549151475Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:17:23.549290 containerd[1606]: time="2025-01-17T12:17:23.549170516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:23.550881 containerd[1606]: time="2025-01-17T12:17:23.549689680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:17:23.550881 containerd[1606]: time="2025-01-17T12:17:23.549815765Z" level=info msg="Connect containerd service" Jan 17 12:17:23.550881 containerd[1606]: time="2025-01-17T12:17:23.549879254Z" level=info msg="using legacy CRI server" Jan 17 12:17:23.550881 containerd[1606]: time="2025-01-17T12:17:23.549890410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:17:23.556055 containerd[1606]: time="2025-01-17T12:17:23.555367540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.560328195Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.560960427Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561048855Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561098363Z" level=info msg="Start subscribing containerd event" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561149401Z" level=info msg="Start recovering state" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561232177Z" level=info msg="Start event monitor" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561249263Z" level=info msg="Start snapshots syncer" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561262577Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561270389Z" level=info msg="Start streaming server" Jan 17 12:17:23.563194 containerd[1606]: time="2025-01-17T12:17:23.561352887Z" level=info msg="containerd successfully booted in 0.200698s" Jan 17 12:17:23.561573 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:17:23.623805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:17:23.641546 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:17:23.655498 systemd[1]: Started sshd@0-143.198.98.155:22-139.178.68.195:47910.service - OpenSSH per-connection server daemon (139.178.68.195:47910). Jan 17 12:17:23.695556 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:17:23.695876 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:17:23.711548 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:17:23.759763 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:17:23.775569 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:17:23.782530 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:17:23.786017 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 17 12:17:23.837468 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 47910 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:23.846022 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:23.880928 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:17:23.891206 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:17:23.902510 systemd-logind[1585]: New session 1 of user core. Jan 17 12:17:23.938055 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:17:23.965676 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:17:24.003581 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:17:24.225385 systemd[1712]: Queued start job for default target default.target. Jan 17 12:17:24.226072 systemd[1712]: Created slice app.slice - User Application Slice. Jan 17 12:17:24.226099 systemd[1712]: Reached target paths.target - Paths. Jan 17 12:17:24.226112 systemd[1712]: Reached target timers.target - Timers. Jan 17 12:17:24.232221 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:17:24.267741 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:17:24.267841 systemd[1712]: Reached target sockets.target - Sockets. Jan 17 12:17:24.267862 systemd[1712]: Reached target basic.target - Basic System. Jan 17 12:17:24.267950 systemd[1712]: Reached target default.target - Main User Target. Jan 17 12:17:24.268020 systemd[1712]: Startup finished in 246ms. Jan 17 12:17:24.268579 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:17:24.280043 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 17 12:17:24.375194 systemd[1]: Started sshd@1-143.198.98.155:22-139.178.68.195:47920.service - OpenSSH per-connection server daemon (139.178.68.195:47920). Jan 17 12:17:24.398580 tar[1602]: linux-amd64/LICENSE Jan 17 12:17:24.401479 tar[1602]: linux-amd64/README.md Jan 17 12:17:24.451396 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:17:24.482922 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 47920 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:24.485602 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:24.494698 systemd-logind[1585]: New session 2 of user core. Jan 17 12:17:24.501424 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:17:24.582444 sshd[1724]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:24.599749 systemd[1]: Started sshd@2-143.198.98.155:22-139.178.68.195:47922.service - OpenSSH per-connection server daemon (139.178.68.195:47922). Jan 17 12:17:24.602079 systemd[1]: sshd@1-143.198.98.155:22-139.178.68.195:47920.service: Deactivated successfully. Jan 17 12:17:24.610877 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:17:24.611057 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:17:24.622516 systemd-logind[1585]: Removed session 2. Jan 17 12:17:24.674106 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 47922 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:24.676266 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:24.682521 systemd-logind[1585]: New session 3 of user core. Jan 17 12:17:24.690475 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:17:24.768226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:17:24.774606 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:17:24.781099 systemd[1]: Startup finished in 11.754s (kernel) + 9.688s (userspace) = 21.443s. Jan 17 12:17:24.781799 sshd[1734]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:24.788623 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:24.791353 systemd[1]: sshd@2-143.198.98.155:22-139.178.68.195:47922.service: Deactivated successfully. Jan 17 12:17:24.800049 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:17:24.805280 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:17:24.809834 systemd-logind[1585]: Removed session 3. Jan 17 12:17:25.917495 kubelet[1749]: E0117 12:17:25.917375 1749 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:25.927650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:25.927958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:34.792457 systemd[1]: Started sshd@3-143.198.98.155:22-139.178.68.195:44466.service - OpenSSH per-connection server daemon (139.178.68.195:44466). Jan 17 12:17:34.877114 sshd[1766]: Accepted publickey for core from 139.178.68.195 port 44466 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:34.878120 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:34.887278 systemd-logind[1585]: New session 4 of user core. Jan 17 12:17:34.896688 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 17 12:17:34.981756 sshd[1766]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:34.997990 systemd[1]: sshd@3-143.198.98.155:22-139.178.68.195:44466.service: Deactivated successfully. Jan 17 12:17:35.010453 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:17:35.011909 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:17:35.049014 systemd[1]: Started sshd@4-143.198.98.155:22-139.178.68.195:44472.service - OpenSSH per-connection server daemon (139.178.68.195:44472). Jan 17 12:17:35.051925 systemd-logind[1585]: Removed session 4. Jan 17 12:17:35.114179 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 44472 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:35.115496 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:35.126618 systemd-logind[1585]: New session 5 of user core. Jan 17 12:17:35.139126 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:17:35.227185 sshd[1774]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:35.244184 systemd[1]: Started sshd@5-143.198.98.155:22-139.178.68.195:44480.service - OpenSSH per-connection server daemon (139.178.68.195:44480). Jan 17 12:17:35.250546 systemd[1]: sshd@4-143.198.98.155:22-139.178.68.195:44472.service: Deactivated successfully. Jan 17 12:17:35.269833 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:17:35.269932 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:17:35.271788 systemd-logind[1585]: Removed session 5. Jan 17 12:17:35.327205 sshd[1779]: Accepted publickey for core from 139.178.68.195 port 44480 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:35.328837 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:35.339155 systemd-logind[1585]: New session 6 of user core. 
Jan 17 12:17:35.353678 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:17:35.428823 sshd[1779]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:35.442598 systemd[1]: Started sshd@6-143.198.98.155:22-139.178.68.195:44486.service - OpenSSH per-connection server daemon (139.178.68.195:44486). Jan 17 12:17:35.443367 systemd[1]: sshd@5-143.198.98.155:22-139.178.68.195:44480.service: Deactivated successfully. Jan 17 12:17:35.449762 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:17:35.453263 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:17:35.465229 systemd-logind[1585]: Removed session 6. Jan 17 12:17:35.531124 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 44486 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:17:35.535233 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:35.549126 systemd-logind[1585]: New session 7 of user core. Jan 17 12:17:35.565232 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:17:35.671303 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:17:35.671826 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:36.178839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:17:36.194505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:36.507535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:17:36.540238 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:36.734419 kubelet[1820]: E0117 12:17:36.734049 1820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:36.737799 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:17:36.742316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:36.742765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:36.760055 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:17:37.681233 dockerd[1829]: time="2025-01-17T12:17:37.680707134Z" level=info msg="Starting up" Jan 17 12:17:37.894046 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3167534614-merged.mount: Deactivated successfully. Jan 17 12:17:38.015417 dockerd[1829]: time="2025-01-17T12:17:38.014702531Z" level=info msg="Loading containers: start." Jan 17 12:17:38.304227 kernel: Initializing XFRM netlink socket Jan 17 12:17:38.468655 systemd-networkd[1231]: docker0: Link UP Jan 17 12:17:38.503179 dockerd[1829]: time="2025-01-17T12:17:38.501982379Z" level=info msg="Loading containers: done." 
Jan 17 12:17:38.543604 dockerd[1829]: time="2025-01-17T12:17:38.543534959Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:17:38.543870 dockerd[1829]: time="2025-01-17T12:17:38.543671019Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:17:38.543870 dockerd[1829]: time="2025-01-17T12:17:38.543792359Z" level=info msg="Daemon has completed initialization" Jan 17 12:17:38.626826 dockerd[1829]: time="2025-01-17T12:17:38.626727987Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:17:38.627233 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:17:40.034264 containerd[1606]: time="2025-01-17T12:17:40.034193895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:17:40.944505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317784096.mount: Deactivated successfully. 
Jan 17 12:17:43.752861 containerd[1606]: time="2025-01-17T12:17:43.749555192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:43.754481 containerd[1606]: time="2025-01-17T12:17:43.754405396Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:17:43.755709 containerd[1606]: time="2025-01-17T12:17:43.755627683Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:43.767357 containerd[1606]: time="2025-01-17T12:17:43.766646528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:43.770286 containerd[1606]: time="2025-01-17T12:17:43.770185009Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.735914174s" Jan 17 12:17:43.770286 containerd[1606]: time="2025-01-17T12:17:43.770271053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:17:43.816617 containerd[1606]: time="2025-01-17T12:17:43.815274639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:17:46.953788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 12:17:46.962434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:47.207427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:47.231657 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:47.412768 kubelet[2056]: E0117 12:17:47.412678 2056 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:47.428140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:47.428408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:47.602670 containerd[1606]: time="2025-01-17T12:17:47.600665363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.602670 containerd[1606]: time="2025-01-17T12:17:47.602398451Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:17:47.604279 containerd[1606]: time="2025-01-17T12:17:47.604221144Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.609814 containerd[1606]: time="2025-01-17T12:17:47.609739919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:47.611608 containerd[1606]: 
time="2025-01-17T12:17:47.611536199Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 3.796178132s" Jan 17 12:17:47.611608 containerd[1606]: time="2025-01-17T12:17:47.611612056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:17:47.653541 containerd[1606]: time="2025-01-17T12:17:47.653483683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:17:47.657170 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 17 12:17:49.500142 containerd[1606]: time="2025-01-17T12:17:49.500038487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:49.503241 containerd[1606]: time="2025-01-17T12:17:49.503121638Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:17:49.507021 containerd[1606]: time="2025-01-17T12:17:49.504257423Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:49.515683 containerd[1606]: time="2025-01-17T12:17:49.515585295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:49.521995 containerd[1606]: 
time="2025-01-17T12:17:49.521386783Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.867837118s" Jan 17 12:17:49.521995 containerd[1606]: time="2025-01-17T12:17:49.521482441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:17:49.569592 containerd[1606]: time="2025-01-17T12:17:49.569366406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:17:50.709277 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 12:17:51.471905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353166814.mount: Deactivated successfully. 
Jan 17 12:17:52.548912 containerd[1606]: time="2025-01-17T12:17:52.548625351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:52.551988 containerd[1606]: time="2025-01-17T12:17:52.551852907Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:17:52.555883 containerd[1606]: time="2025-01-17T12:17:52.555758278Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:52.558907 containerd[1606]: time="2025-01-17T12:17:52.558798658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:52.560091 containerd[1606]: time="2025-01-17T12:17:52.559791274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.990343402s" Jan 17 12:17:52.560091 containerd[1606]: time="2025-01-17T12:17:52.559846599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:17:52.604703 containerd[1606]: time="2025-01-17T12:17:52.604335166Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:17:53.295633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335007132.mount: Deactivated successfully. 
Jan 17 12:17:55.516024 containerd[1606]: time="2025-01-17T12:17:55.514653804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:55.518090 containerd[1606]: time="2025-01-17T12:17:55.518002249Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 17 12:17:55.520262 containerd[1606]: time="2025-01-17T12:17:55.520177334Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:55.537006 containerd[1606]: time="2025-01-17T12:17:55.536379657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:55.546502 containerd[1606]: time="2025-01-17T12:17:55.546417190Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.942007767s"
Jan 17 12:17:55.546773 containerd[1606]: time="2025-01-17T12:17:55.546746021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:17:55.640210 containerd[1606]: time="2025-01-17T12:17:55.640154158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:17:55.656468 systemd-resolved[1483]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Jan 17 12:17:56.316113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437604680.mount: Deactivated successfully.
Jan 17 12:17:56.340876 containerd[1606]: time="2025-01-17T12:17:56.338355502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:56.343694 containerd[1606]: time="2025-01-17T12:17:56.342604782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 17 12:17:56.345710 containerd[1606]: time="2025-01-17T12:17:56.345198654Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:56.351866 containerd[1606]: time="2025-01-17T12:17:56.350880343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:56.354022 containerd[1606]: time="2025-01-17T12:17:56.353919336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 713.486661ms"
Jan 17 12:17:56.354022 containerd[1606]: time="2025-01-17T12:17:56.354010485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 17 12:17:56.428267 containerd[1606]: time="2025-01-17T12:17:56.427887109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 17 12:17:57.115250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684315418.mount: Deactivated successfully.
Jan 17 12:17:57.452908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 12:17:57.479857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:17:57.854378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:17:57.866222 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:17:58.331649 kubelet[2172]: E0117 12:17:58.331455 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:17:58.370055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:17:58.370267 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:18:01.571826 containerd[1606]: time="2025-01-17T12:18:01.568750142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:01.576219 containerd[1606]: time="2025-01-17T12:18:01.574726027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 17 12:18:01.580773 containerd[1606]: time="2025-01-17T12:18:01.577877080Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:01.588910 containerd[1606]: time="2025-01-17T12:18:01.588652666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:01.598165 containerd[1606]: time="2025-01-17T12:18:01.594170716Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.166172141s"
Jan 17 12:18:01.598419 containerd[1606]: time="2025-01-17T12:18:01.598194697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 17 12:18:05.855491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:18:05.869668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:18:05.915781 systemd[1]: Reloading requested from client PID 2279 ('systemctl') (unit session-7.scope)...
Jan 17 12:18:05.915927 systemd[1]: Reloading...
Jan 17 12:18:06.140049 zram_generator::config[2321]: No configuration found.
Jan 17 12:18:06.340398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:18:06.431221 systemd[1]: Reloading finished in 514 ms.
Jan 17 12:18:06.530433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:18:06.551521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:18:06.553356 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:18:06.553761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:18:06.557368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:18:06.827397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:18:06.830580 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:18:06.973102 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:18:06.973102 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:18:06.973102 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:18:06.991033 kubelet[2387]: I0117 12:18:06.990793 2387 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:18:07.342224 update_engine[1587]: I20250117 12:18:07.338476 1587 update_attempter.cc:509] Updating boot flags...
Jan 17 12:18:07.429009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2401)
Jan 17 12:18:07.463043 kubelet[2387]: I0117 12:18:07.461218 2387 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:18:07.463043 kubelet[2387]: I0117 12:18:07.461261 2387 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:18:07.463043 kubelet[2387]: I0117 12:18:07.461669 2387 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:18:07.519139 kubelet[2387]: E0117 12:18:07.516351 2387 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.98.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.526997 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2402)
Jan 17 12:18:07.529021 kubelet[2387]: I0117 12:18:07.528707 2387 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:18:07.589713 kubelet[2387]: I0117 12:18:07.589669 2387 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:18:07.595241 kubelet[2387]: I0117 12:18:07.595055 2387 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:18:07.596817 kubelet[2387]: I0117 12:18:07.596758 2387 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:18:07.598413 kubelet[2387]: I0117 12:18:07.598375 2387 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:18:07.598413 kubelet[2387]: I0117 12:18:07.598460 2387 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:18:07.599080 kubelet[2387]: I0117 12:18:07.598904 2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:18:07.599317 kubelet[2387]: I0117 12:18:07.599255 2387 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:18:07.599487 kubelet[2387]: I0117 12:18:07.599417 2387 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:18:07.599688 kubelet[2387]: I0117 12:18:07.599581 2387 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:18:07.599688 kubelet[2387]: I0117 12:18:07.599612 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:18:07.615228 kubelet[2387]: W0117 12:18:07.614913 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.98.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-89c7b8b189&limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.615228 kubelet[2387]: E0117 12:18:07.615036 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.98.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-89c7b8b189&limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.615228 kubelet[2387]: W0117 12:18:07.615144 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.98.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.615228 kubelet[2387]: E0117 12:18:07.615193 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.98.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.619569 kubelet[2387]: I0117 12:18:07.618832 2387 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:18:07.625184 kubelet[2387]: I0117 12:18:07.625129 2387 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:18:07.625363 kubelet[2387]: W0117 12:18:07.625271 2387 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 12:18:07.627523 kubelet[2387]: I0117 12:18:07.627480 2387 server.go:1256] "Started kubelet"
Jan 17 12:18:07.628102 kubelet[2387]: I0117 12:18:07.628072 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:18:07.628821 kubelet[2387]: I0117 12:18:07.628796 2387 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:18:07.630657 kubelet[2387]: I0117 12:18:07.630631 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:18:07.644009 kubelet[2387]: I0117 12:18:07.642781 2387 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:18:07.644009 kubelet[2387]: E0117 12:18:07.643057 2387 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.98.155:6443/api/v1/namespaces/default/events\": dial tcp 143.198.98.155:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-89c7b8b189.181b7a0f59944747 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-89c7b8b189,UID:ci-4081.3.0-a-89c7b8b189,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-89c7b8b189,},FirstTimestamp:2025-01-17 12:18:07.627446087 +0000 UTC m=+0.789859316,LastTimestamp:2025-01-17 12:18:07.627446087 +0000 UTC m=+0.789859316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-89c7b8b189,}"
Jan 17 12:18:07.644009 kubelet[2387]: I0117 12:18:07.643557 2387 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:18:07.644589 kubelet[2387]: I0117 12:18:07.644566 2387 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:18:07.646838 kubelet[2387]: I0117 12:18:07.646798 2387 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:18:07.648003 kubelet[2387]: I0117 12:18:07.647941 2387 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:18:07.650263 kubelet[2387]: W0117 12:18:07.650195 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.98.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.650263 kubelet[2387]: E0117 12:18:07.650271 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.98.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.650450 kubelet[2387]: E0117 12:18:07.650377 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.98.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-89c7b8b189?timeout=10s\": dial tcp 143.198.98.155:6443: connect: connection refused" interval="200ms"
Jan 17 12:18:07.651608 kubelet[2387]: I0117 12:18:07.651580 2387 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:18:07.651947 kubelet[2387]: I0117 12:18:07.651921 2387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:18:07.655734 kubelet[2387]: E0117 12:18:07.655694 2387 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:18:07.656527 kubelet[2387]: I0117 12:18:07.656504 2387 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:18:07.670520 kubelet[2387]: I0117 12:18:07.670467 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:18:07.677340 kubelet[2387]: I0117 12:18:07.677165 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:18:07.677340 kubelet[2387]: I0117 12:18:07.677223 2387 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:18:07.677340 kubelet[2387]: I0117 12:18:07.677256 2387 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:18:07.677340 kubelet[2387]: E0117 12:18:07.677350 2387 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:18:07.692581 kubelet[2387]: W0117 12:18:07.691870 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.98.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.692581 kubelet[2387]: E0117 12:18:07.691959 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.98.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:07.695668 kubelet[2387]: I0117 12:18:07.695633 2387 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:18:07.695668 kubelet[2387]: I0117 12:18:07.695661 2387 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:18:07.695668 kubelet[2387]: I0117 12:18:07.695686 2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:18:07.701628 kubelet[2387]: I0117 12:18:07.701563 2387 policy_none.go:49] "None policy: Start"
Jan 17 12:18:07.703137 kubelet[2387]: I0117 12:18:07.703099 2387 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:18:07.703304 kubelet[2387]: I0117 12:18:07.703173 2387 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:18:07.717013 kubelet[2387]: I0117 12:18:07.715775 2387 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:18:07.717013 kubelet[2387]: I0117 12:18:07.716396 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:18:07.719769 kubelet[2387]: E0117 12:18:07.719732 2387 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-89c7b8b189\" not found"
Jan 17 12:18:07.748829 kubelet[2387]: I0117 12:18:07.748783 2387 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.749746 kubelet[2387]: E0117 12:18:07.749694 2387 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.98.155:6443/api/v1/nodes\": dial tcp 143.198.98.155:6443: connect: connection refused" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.778526 kubelet[2387]: I0117 12:18:07.778429 2387 topology_manager.go:215] "Topology Admit Handler" podUID="236bc78adfce230dd162be9fe87f153b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.782885 kubelet[2387]: I0117 12:18:07.782810 2387 topology_manager.go:215] "Topology Admit Handler" podUID="23b4f217f47daedaa2da6787146058d1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.785658 kubelet[2387]: I0117 12:18:07.784142 2387 topology_manager.go:215] "Topology Admit Handler" podUID="24b8e15bfd910494fec018e4d3fca98a" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852398 kubelet[2387]: I0117 12:18:07.852222 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852398 kubelet[2387]: I0117 12:18:07.852305 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852398 kubelet[2387]: I0117 12:18:07.852347 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852398 kubelet[2387]: I0117 12:18:07.852381 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852686 kubelet[2387]: I0117 12:18:07.852416 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852686 kubelet[2387]: I0117 12:18:07.852448 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852686 kubelet[2387]: I0117 12:18:07.852483 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.852786 kubelet[2387]: E0117 12:18:07.852732 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.98.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-89c7b8b189?timeout=10s\": dial tcp 143.198.98.155:6443: connect: connection refused" interval="400ms"
Jan 17 12:18:07.956934 kubelet[2387]: I0117 12:18:07.955515 2387 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.956934 kubelet[2387]: E0117 12:18:07.956110 2387 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.98.155:6443/api/v1/nodes\": dial tcp 143.198.98.155:6443: connect: connection refused" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.956934 kubelet[2387]: I0117 12:18:07.956385 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:07.956934 kubelet[2387]: I0117 12:18:07.956437 2387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24b8e15bfd910494fec018e4d3fca98a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-89c7b8b189\" (UID: \"24b8e15bfd910494fec018e4d3fca98a\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:08.090818 kubelet[2387]: E0117 12:18:08.090718 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:08.091951 containerd[1606]: time="2025-01-17T12:18:08.091890864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-89c7b8b189,Uid:236bc78adfce230dd162be9fe87f153b,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:08.093852 kubelet[2387]: E0117 12:18:08.093820 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:08.097702 containerd[1606]: time="2025-01-17T12:18:08.097282389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-89c7b8b189,Uid:23b4f217f47daedaa2da6787146058d1,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:08.100688 kubelet[2387]: E0117 12:18:08.100533 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:08.101668 containerd[1606]: time="2025-01-17T12:18:08.101342890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-89c7b8b189,Uid:24b8e15bfd910494fec018e4d3fca98a,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:08.253435 kubelet[2387]: E0117 12:18:08.253376 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.98.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-89c7b8b189?timeout=10s\": dial tcp 143.198.98.155:6443: connect: connection refused" interval="800ms"
Jan 17 12:18:08.363386 kubelet[2387]: I0117 12:18:08.361666 2387 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:08.363386 kubelet[2387]: E0117 12:18:08.363077 2387 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.98.155:6443/api/v1/nodes\": dial tcp 143.198.98.155:6443: connect: connection refused" node="ci-4081.3.0-a-89c7b8b189"
Jan 17 12:18:08.740730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962789354.mount: Deactivated successfully.
Jan 17 12:18:08.747876 kubelet[2387]: W0117 12:18:08.747779 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.98.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:08.747876 kubelet[2387]: E0117 12:18:08.747878 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.98.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:08.758152 containerd[1606]: time="2025-01-17T12:18:08.757493161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:18:08.761783 kubelet[2387]: W0117 12:18:08.761483 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.98.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:08.761783 kubelet[2387]: E0117 12:18:08.761797 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.98.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused
Jan 17 12:18:08.767794 containerd[1606]: time="2025-01-17T12:18:08.767676867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 17 12:18:08.781880 containerd[1606]: time="2025-01-17T12:18:08.781712139Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:18:08.785927 containerd[1606]: time="2025-01-17T12:18:08.785099096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:18:08.785927 containerd[1606]: time="2025-01-17T12:18:08.785566585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:18:08.787479 containerd[1606]: time="2025-01-17T12:18:08.787426306Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:18:08.802794 containerd[1606]: time="2025-01-17T12:18:08.795798072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:18:08.809596 containerd[1606]: time="2025-01-17T12:18:08.809526393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:18:08.811861 containerd[1606]: time="2025-01-17T12:18:08.811225778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.18666ms"
Jan 17 12:18:08.816068 containerd[1606]: time="2025-01-17T12:18:08.815719261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 714.256773ms"
Jan 17 12:18:08.816791 containerd[1606]: time="2025-01-17T12:18:08.816699000Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.138638ms"
Jan 17 12:18:09.055238 kubelet[2387]: E0117 12:18:09.054906 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.98.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-89c7b8b189?timeout=10s\": dial tcp 143.198.98.155:6443: connect: connection refused" interval="1.6s"
Jan 17 12:18:09.074435 containerd[1606]: time="2025-01-17T12:18:09.072421970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:09.074435 containerd[1606]: time="2025-01-17T12:18:09.072533838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:09.074435 containerd[1606]: time="2025-01-17T12:18:09.072569264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:09.074435 containerd[1606]: time="2025-01-17T12:18:09.072770939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:09.081494 containerd[1606]: time="2025-01-17T12:18:09.081122723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:09.081494 containerd[1606]: time="2025-01-17T12:18:09.081334971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:09.083293 containerd[1606]: time="2025-01-17T12:18:09.082921853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:09.083999 containerd[1606]: time="2025-01-17T12:18:09.083762400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:09.087830 containerd[1606]: time="2025-01-17T12:18:09.086707292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:09.087830 containerd[1606]: time="2025-01-17T12:18:09.086797699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:09.087830 containerd[1606]: time="2025-01-17T12:18:09.086821520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:09.089088 containerd[1606]: time="2025-01-17T12:18:09.088669084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:09.165354 kubelet[2387]: I0117 12:18:09.164848 2387 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:09.173260 kubelet[2387]: W0117 12:18:09.170507 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.98.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-89c7b8b189&limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused Jan 17 12:18:09.173260 kubelet[2387]: E0117 12:18:09.170613 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.98.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-89c7b8b189&limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused Jan 17 12:18:09.173662 kubelet[2387]: E0117 12:18:09.173621 2387 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.98.155:6443/api/v1/nodes\": dial tcp 143.198.98.155:6443: connect: connection refused" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:09.218124 kubelet[2387]: W0117 12:18:09.217267 2387 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.98.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused Jan 17 12:18:09.224880 kubelet[2387]: E0117 12:18:09.221150 2387 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.98.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.98.155:6443: connect: connection refused Jan 17 12:18:09.231028 containerd[1606]: time="2025-01-17T12:18:09.230942425Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-89c7b8b189,Uid:23b4f217f47daedaa2da6787146058d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"80c767dbeb9a3ec249de154c15d597554498058424441cbd197a31567ca982ae\"" Jan 17 12:18:09.239157 kubelet[2387]: E0117 12:18:09.238394 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:09.259552 containerd[1606]: time="2025-01-17T12:18:09.259315536Z" level=info msg="CreateContainer within sandbox \"80c767dbeb9a3ec249de154c15d597554498058424441cbd197a31567ca982ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:18:09.276732 containerd[1606]: time="2025-01-17T12:18:09.276645510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-89c7b8b189,Uid:24b8e15bfd910494fec018e4d3fca98a,Namespace:kube-system,Attempt:0,} returns sandbox id \"46e3bf6d316ebf6d3aa9a9ba22c6f7d7cb843120d15a1e72b2e2b87153d560e0\"" Jan 17 12:18:09.280365 kubelet[2387]: E0117 12:18:09.280322 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:09.291018 containerd[1606]: time="2025-01-17T12:18:09.289703878Z" level=info msg="CreateContainer within sandbox \"46e3bf6d316ebf6d3aa9a9ba22c6f7d7cb843120d15a1e72b2e2b87153d560e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:18:09.291296 containerd[1606]: time="2025-01-17T12:18:09.291258120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-89c7b8b189,Uid:236bc78adfce230dd162be9fe87f153b,Namespace:kube-system,Attempt:0,} returns sandbox id \"30eda4125f8661c57291d164aeccec816c68a673541486a5ee7d32e80b0bb414\"" Jan 17 12:18:09.292659 kubelet[2387]: E0117 12:18:09.292453 
2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:09.299169 containerd[1606]: time="2025-01-17T12:18:09.298796796Z" level=info msg="CreateContainer within sandbox \"30eda4125f8661c57291d164aeccec816c68a673541486a5ee7d32e80b0bb414\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:18:09.308231 containerd[1606]: time="2025-01-17T12:18:09.307493890Z" level=info msg="CreateContainer within sandbox \"80c767dbeb9a3ec249de154c15d597554498058424441cbd197a31567ca982ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52e11c8194eb3dd78eaac1b839f41f658bfe5e333b3f73928197e33d5e8120a9\"" Jan 17 12:18:09.311925 containerd[1606]: time="2025-01-17T12:18:09.311850325Z" level=info msg="StartContainer for \"52e11c8194eb3dd78eaac1b839f41f658bfe5e333b3f73928197e33d5e8120a9\"" Jan 17 12:18:09.337145 containerd[1606]: time="2025-01-17T12:18:09.336879224Z" level=info msg="CreateContainer within sandbox \"46e3bf6d316ebf6d3aa9a9ba22c6f7d7cb843120d15a1e72b2e2b87153d560e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"451402c57b274abe774ea4d257452a496e1ef58ccf55529e8fbc9ce44f617e70\"" Jan 17 12:18:09.338511 containerd[1606]: time="2025-01-17T12:18:09.338397740Z" level=info msg="StartContainer for \"451402c57b274abe774ea4d257452a496e1ef58ccf55529e8fbc9ce44f617e70\"" Jan 17 12:18:09.372530 containerd[1606]: time="2025-01-17T12:18:09.371369130Z" level=info msg="CreateContainer within sandbox \"30eda4125f8661c57291d164aeccec816c68a673541486a5ee7d32e80b0bb414\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"557b073486a94b3eb4dac1c4e2ac8811ed3bd3fe6ce0bc9b6df8f3e5cc52c160\"" Jan 17 12:18:09.374215 containerd[1606]: time="2025-01-17T12:18:09.372902174Z" level=info msg="StartContainer for 
\"557b073486a94b3eb4dac1c4e2ac8811ed3bd3fe6ce0bc9b6df8f3e5cc52c160\"" Jan 17 12:18:09.531472 containerd[1606]: time="2025-01-17T12:18:09.531187428Z" level=info msg="StartContainer for \"52e11c8194eb3dd78eaac1b839f41f658bfe5e333b3f73928197e33d5e8120a9\" returns successfully" Jan 17 12:18:09.590888 containerd[1606]: time="2025-01-17T12:18:09.590713688Z" level=info msg="StartContainer for \"451402c57b274abe774ea4d257452a496e1ef58ccf55529e8fbc9ce44f617e70\" returns successfully" Jan 17 12:18:09.598524 containerd[1606]: time="2025-01-17T12:18:09.598327594Z" level=info msg="StartContainer for \"557b073486a94b3eb4dac1c4e2ac8811ed3bd3fe6ce0bc9b6df8f3e5cc52c160\" returns successfully" Jan 17 12:18:09.642708 kubelet[2387]: E0117 12:18:09.642360 2387 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.98.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.98.155:6443: connect: connection refused Jan 17 12:18:09.738551 kubelet[2387]: E0117 12:18:09.736209 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:09.763858 kubelet[2387]: E0117 12:18:09.763797 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:09.774594 kubelet[2387]: E0117 12:18:09.774254 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:10.778014 kubelet[2387]: I0117 12:18:10.776640 2387 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189" Jan 17 
12:18:10.785109 kubelet[2387]: E0117 12:18:10.785046 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:11.781935 kubelet[2387]: E0117 12:18:11.781871 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:12.605346 kubelet[2387]: I0117 12:18:12.605248 2387 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:12.621004 kubelet[2387]: I0117 12:18:12.619198 2387 apiserver.go:52] "Watching apiserver" Jan 17 12:18:12.654728 kubelet[2387]: I0117 12:18:12.654391 2387 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:18:12.719089 kubelet[2387]: E0117 12:18:12.718389 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 17 12:18:16.087769 systemd[1]: Reloading requested from client PID 2669 ('systemctl') (unit session-7.scope)... Jan 17 12:18:16.088388 systemd[1]: Reloading... Jan 17 12:18:16.252245 zram_generator::config[2711]: No configuration found. Jan 17 12:18:16.486332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:16.602274 systemd[1]: Reloading finished in 513 ms. Jan 17 12:18:16.655009 kubelet[2387]: I0117 12:18:16.654891 2387 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:16.655745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:16.666263 systemd[1]: kubelet.service: Deactivated successfully. 
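The `/var/run/docker.sock` warning above is systemd rewriting a legacy socket path on the fly at reload time. A minimal sketch of the drop-in that would make the override explicit and silence the warning — the drop-in filename and the temp directory are illustrative (on a real host the directory would be `/etc/systemd/system/docker.socket.d/`), not taken from this machine:

```shell
# Illustrative docker.socket drop-in, written to a temp dir so the sketch
# is side-effect free. Emptying ListenStream= first clears the inherited
# value before setting the /run path (the standard systemd idiom for
# replacing a list-valued directive).
d=$(mktemp -d)
mkdir -p "$d/docker.socket.d"
cat > "$d/docker.socket.d/10-run-path.conf" <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
cat "$d/docker.socket.d/10-run-path.conf"
```

On the real host this would be followed by `systemctl daemon-reload`, which is exactly the reload visible in the log above.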
Jan 17 12:18:16.666745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:16.675518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:16.900538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:16.919924 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:17.044159 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:17.044159 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:18:17.044159 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:17.053049 kubelet[2769]: I0117 12:18:17.051459 2769 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:17.068036 kubelet[2769]: I0117 12:18:17.067482 2769 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:18:17.068036 kubelet[2769]: I0117 12:18:17.067527 2769 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:17.068036 kubelet[2769]: I0117 12:18:17.067890 2769 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:18:17.071759 kubelet[2769]: I0117 12:18:17.071721 2769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 17 12:18:17.087849 kubelet[2769]: I0117 12:18:17.087780 2769 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:17.102671 kubelet[2769]: I0117 12:18:17.102504 2769 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:18:17.103901 kubelet[2769]: I0117 12:18:17.103856 2769 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:17.104531 kubelet[2769]: I0117 12:18:17.104492 2769 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Jan 17 12:18:17.105041 kubelet[2769]: I0117 12:18:17.104742 2769 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:17.105041 kubelet[2769]: I0117 12:18:17.104770 2769 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:18:17.105041 kubelet[2769]: I0117 12:18:17.104830 2769 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:17.106321 kubelet[2769]: I0117 12:18:17.105255 2769 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:18:17.106644 kubelet[2769]: I0117 12:18:17.106615 2769 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:17.106798 kubelet[2769]: I0117 12:18:17.106784 2769 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:18:17.106874 kubelet[2769]: I0117 12:18:17.106865 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:17.118440 kubelet[2769]: I0117 12:18:17.118390 2769 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:17.119797 kubelet[2769]: I0117 12:18:17.119084 2769 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:17.121133 kubelet[2769]: I0117 12:18:17.120682 2769 server.go:1256] "Started kubelet" Jan 17 12:18:17.142246 kubelet[2769]: I0117 12:18:17.142202 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:17.147419 kubelet[2769]: I0117 12:18:17.147365 2769 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:17.151090 kubelet[2769]: I0117 12:18:17.150900 2769 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:18:17.176341 kubelet[2769]: I0117 12:18:17.176080 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:17.176868 kubelet[2769]: I0117 12:18:17.176762 2769 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:17.197325 kubelet[2769]: E0117 12:18:17.197184 2769 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:17.200389 kubelet[2769]: I0117 12:18:17.199690 2769 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:18:17.200729 kubelet[2769]: I0117 12:18:17.200694 2769 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:18:17.202566 kubelet[2769]: I0117 12:18:17.202500 2769 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:18:17.202566 kubelet[2769]: I0117 12:18:17.202572 2769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:17.207184 kubelet[2769]: I0117 12:18:17.206583 2769 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:17.207184 kubelet[2769]: I0117 12:18:17.206720 2769 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:17.207409 kubelet[2769]: I0117 12:18:17.207260 2769 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:18:17.207409 kubelet[2769]: I0117 12:18:17.207287 2769 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:17.207409 kubelet[2769]: I0117 12:18:17.207306 2769 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:18:17.207409 kubelet[2769]: E0117 12:18:17.207373 2769 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:17.222131 kubelet[2769]: I0117 12:18:17.220571 2769 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:17.295566 kubelet[2769]: I0117 12:18:17.295525 2769 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.308266 kubelet[2769]: E0117 12:18:17.308139 2769 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:18:17.316210 kubelet[2769]: I0117 12:18:17.312334 2769 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.326052 kubelet[2769]: I0117 12:18:17.321132 2769 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.393945 2769 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.394138 2769 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.394172 2769 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.394420 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.394451 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:18:17.397135 kubelet[2769]: I0117 12:18:17.394463 2769 policy_none.go:49] "None policy: Start" Jan 
17 12:18:17.401725 kubelet[2769]: I0117 12:18:17.401584 2769 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:17.401725 kubelet[2769]: I0117 12:18:17.401644 2769 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:17.402481 kubelet[2769]: I0117 12:18:17.402010 2769 state_mem.go:75] "Updated machine memory state" Jan 17 12:18:17.418821 kubelet[2769]: I0117 12:18:17.414095 2769 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:17.418821 kubelet[2769]: I0117 12:18:17.416327 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:17.508532 kubelet[2769]: I0117 12:18:17.508464 2769 topology_manager.go:215] "Topology Admit Handler" podUID="24b8e15bfd910494fec018e4d3fca98a" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.511158 kubelet[2769]: I0117 12:18:17.508902 2769 topology_manager.go:215] "Topology Admit Handler" podUID="236bc78adfce230dd162be9fe87f153b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.511158 kubelet[2769]: I0117 12:18:17.509006 2769 topology_manager.go:215] "Topology Admit Handler" podUID="23b4f217f47daedaa2da6787146058d1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.538195 kubelet[2769]: W0117 12:18:17.537360 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:17.543367 kubelet[2769]: W0117 12:18:17.543292 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:17.545022 kubelet[2769]: W0117 12:18:17.544633 2769 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:17.604915 kubelet[2769]: I0117 12:18:17.604031 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.604915 kubelet[2769]: I0117 12:18:17.604471 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.604915 kubelet[2769]: I0117 12:18:17.604586 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.604915 kubelet[2769]: I0117 12:18:17.604648 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.604915 kubelet[2769]: I0117 12:18:17.604692 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.605318 kubelet[2769]: I0117 12:18:17.604731 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.605318 kubelet[2769]: I0117 12:18:17.604767 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23b4f217f47daedaa2da6787146058d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-89c7b8b189\" (UID: \"23b4f217f47daedaa2da6787146058d1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.605747 kubelet[2769]: I0117 12:18:17.604811 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24b8e15bfd910494fec018e4d3fca98a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-89c7b8b189\" (UID: \"24b8e15bfd910494fec018e4d3fca98a\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.605747 kubelet[2769]: I0117 12:18:17.605690 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/236bc78adfce230dd162be9fe87f153b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-89c7b8b189\" (UID: \"236bc78adfce230dd162be9fe87f153b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189" Jan 17 12:18:17.844507 kubelet[2769]: E0117 
12:18:17.843240 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:17.846842 kubelet[2769]: E0117 12:18:17.846787 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:17.847254 kubelet[2769]: E0117 12:18:17.847126 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:18.111165 kubelet[2769]: I0117 12:18:18.111001 2769 apiserver.go:52] "Watching apiserver" Jan 17 12:18:18.200625 kubelet[2769]: I0117 12:18:18.200536 2769 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:18:18.255105 kubelet[2769]: E0117 12:18:18.254884 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:18.257065 kubelet[2769]: E0117 12:18:18.256285 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:18.261534 kubelet[2769]: E0117 12:18:18.261434 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:18.328732 kubelet[2769]: I0117 12:18:18.325425 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-89c7b8b189" podStartSLOduration=1.325338016 podStartE2EDuration="1.325338016s" 
podCreationTimestamp="2025-01-17 12:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:18.306348445 +0000 UTC m=+1.370819745" watchObservedRunningTime="2025-01-17 12:18:18.325338016 +0000 UTC m=+1.389809309" Jan 17 12:18:18.366295 kubelet[2769]: I0117 12:18:18.363954 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-89c7b8b189" podStartSLOduration=1.363910609 podStartE2EDuration="1.363910609s" podCreationTimestamp="2025-01-17 12:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:18.363670351 +0000 UTC m=+1.428141644" watchObservedRunningTime="2025-01-17 12:18:18.363910609 +0000 UTC m=+1.428381887" Jan 17 12:18:18.366295 kubelet[2769]: I0117 12:18:18.364636 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-89c7b8b189" podStartSLOduration=1.364585395 podStartE2EDuration="1.364585395s" podCreationTimestamp="2025-01-17 12:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:18.329366211 +0000 UTC m=+1.393837497" watchObservedRunningTime="2025-01-17 12:18:18.364585395 +0000 UTC m=+1.429056691" Jan 17 12:18:18.934570 sudo[1794]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:18.942269 sshd[1787]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:18.951846 systemd[1]: sshd@6-143.198.98.155:22-139.178.68.195:44486.service: Deactivated successfully. Jan 17 12:18:18.956781 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:18:18.959188 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:18:18.961309 systemd-logind[1585]: Removed session 7. 
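The recurring `dns.go:153` warnings throughout this capture come from the glibc resolver's hard cap of three `nameserver` entries: kubelet applies only the first three lines of the node's resolv.conf, duplicates included, which is why `67.207.67.2` appears twice in the "applied nameserver line". A sketch reproducing the truncation on sample input (the fourth nameserver is a made-up stand-in, not from this host):

```shell
# Emulate the resolver's three-nameserver cap: keep the first three
# nameserver entries verbatim, duplicates and all.
printf 'nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n' \
  | awk '/^nameserver/ { print $2 }' | head -3 | xargs
# prints: 67.207.67.2 67.207.67.3 67.207.67.2  (the applied line in the log)
```

The warning is therefore about the input resolv.conf, not about kubelet itself; deduplicating the nameserver list at the source would make all three entries distinct.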
Jan 17 12:18:19.257431 kubelet[2769]: E0117 12:18:19.257390 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:19.259350 kubelet[2769]: E0117 12:18:19.258869 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:19.476082 kubelet[2769]: E0117 12:18:19.473305 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:20.260271 kubelet[2769]: E0117 12:18:20.260221 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:20.740338 kubelet[2769]: E0117 12:18:20.740250 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:21.274374 kubelet[2769]: E0117 12:18:21.272322 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:22.271897 kubelet[2769]: E0117 12:18:22.270481 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:28.211745 kubelet[2769]: E0117 12:18:28.209267 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:28.403726 kubelet[2769]: I0117 12:18:28.403684 2769 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 12:18:28.404250 containerd[1606]: time="2025-01-17T12:18:28.404195347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 12:18:28.405758 kubelet[2769]: I0117 12:18:28.404808 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 12:18:29.282488 kubelet[2769]: I0117 12:18:29.282415 2769 topology_manager.go:215] "Topology Admit Handler" podUID="3c3ad0e0-17f4-4583-8637-9ba2cd295b97" podNamespace="kube-system" podName="kube-proxy-b7nxz"
Jan 17 12:18:29.316027 kubelet[2769]: I0117 12:18:29.313095 2769 topology_manager.go:215] "Topology Admit Handler" podUID="8c366ed9-8196-4c81-8b66-cf043bec401f" podNamespace="kube-flannel" podName="kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443008 kubelet[2769]: I0117 12:18:29.441834 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8c366ed9-8196-4c81-8b66-cf043bec401f-flannel-cfg\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443008 kubelet[2769]: I0117 12:18:29.441916 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c3ad0e0-17f4-4583-8637-9ba2cd295b97-kube-proxy\") pod \"kube-proxy-b7nxz\" (UID: \"3c3ad0e0-17f4-4583-8637-9ba2cd295b97\") " pod="kube-system/kube-proxy-b7nxz"
Jan 17 12:18:29.443008 kubelet[2769]: I0117 12:18:29.441962 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c3ad0e0-17f4-4583-8637-9ba2cd295b97-lib-modules\") pod \"kube-proxy-b7nxz\" (UID: \"3c3ad0e0-17f4-4583-8637-9ba2cd295b97\") " pod="kube-system/kube-proxy-b7nxz"
Jan 17 12:18:29.443008 kubelet[2769]: I0117 12:18:29.442021 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8j5c\" (UniqueName: \"kubernetes.io/projected/3c3ad0e0-17f4-4583-8637-9ba2cd295b97-kube-api-access-t8j5c\") pod \"kube-proxy-b7nxz\" (UID: \"3c3ad0e0-17f4-4583-8637-9ba2cd295b97\") " pod="kube-system/kube-proxy-b7nxz"
Jan 17 12:18:29.443008 kubelet[2769]: I0117 12:18:29.442069 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8c366ed9-8196-4c81-8b66-cf043bec401f-cni\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443382 kubelet[2769]: I0117 12:18:29.442111 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x44b\" (UniqueName: \"kubernetes.io/projected/8c366ed9-8196-4c81-8b66-cf043bec401f-kube-api-access-5x44b\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443382 kubelet[2769]: I0117 12:18:29.442151 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c3ad0e0-17f4-4583-8637-9ba2cd295b97-xtables-lock\") pod \"kube-proxy-b7nxz\" (UID: \"3c3ad0e0-17f4-4583-8637-9ba2cd295b97\") " pod="kube-system/kube-proxy-b7nxz"
Jan 17 12:18:29.443382 kubelet[2769]: I0117 12:18:29.442184 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c366ed9-8196-4c81-8b66-cf043bec401f-run\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443382 kubelet[2769]: I0117 12:18:29.442221 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8c366ed9-8196-4c81-8b66-cf043bec401f-cni-plugin\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.443382 kubelet[2769]: I0117 12:18:29.442257 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c366ed9-8196-4c81-8b66-cf043bec401f-xtables-lock\") pod \"kube-flannel-ds-9vgmw\" (UID: \"8c366ed9-8196-4c81-8b66-cf043bec401f\") " pod="kube-flannel/kube-flannel-ds-9vgmw"
Jan 17 12:18:29.623777 kubelet[2769]: E0117 12:18:29.622295 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:29.629309 containerd[1606]: time="2025-01-17T12:18:29.629179020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9vgmw,Uid:8c366ed9-8196-4c81-8b66-cf043bec401f,Namespace:kube-flannel,Attempt:0,}"
Jan 17 12:18:29.701070 containerd[1606]: time="2025-01-17T12:18:29.700474565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:29.701070 containerd[1606]: time="2025-01-17T12:18:29.700592859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:29.701070 containerd[1606]: time="2025-01-17T12:18:29.700616078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:29.701070 containerd[1606]: time="2025-01-17T12:18:29.700770224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:29.820662 containerd[1606]: time="2025-01-17T12:18:29.820578148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9vgmw,Uid:8c366ed9-8196-4c81-8b66-cf043bec401f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\""
Jan 17 12:18:29.823935 kubelet[2769]: E0117 12:18:29.822744 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:29.834091 containerd[1606]: time="2025-01-17T12:18:29.832746140Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 17 12:18:29.899585 kubelet[2769]: E0117 12:18:29.897847 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:29.899772 containerd[1606]: time="2025-01-17T12:18:29.899130326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7nxz,Uid:3c3ad0e0-17f4-4583-8637-9ba2cd295b97,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:29.958252 containerd[1606]: time="2025-01-17T12:18:29.957946463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:29.958252 containerd[1606]: time="2025-01-17T12:18:29.958113699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:29.958252 containerd[1606]: time="2025-01-17T12:18:29.958139383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:29.958914 containerd[1606]: time="2025-01-17T12:18:29.958346950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:30.053317 containerd[1606]: time="2025-01-17T12:18:30.053174204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7nxz,Uid:3c3ad0e0-17f4-4583-8637-9ba2cd295b97,Namespace:kube-system,Attempt:0,} returns sandbox id \"a60f43f9ca21b07b8bbe3bcff1f4d1d497723e00ba752c7208427d3387865749\""
Jan 17 12:18:30.056606 kubelet[2769]: E0117 12:18:30.056284 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:30.062284 containerd[1606]: time="2025-01-17T12:18:30.061655560Z" level=info msg="CreateContainer within sandbox \"a60f43f9ca21b07b8bbe3bcff1f4d1d497723e00ba752c7208427d3387865749\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:18:30.094629 containerd[1606]: time="2025-01-17T12:18:30.094533698Z" level=info msg="CreateContainer within sandbox \"a60f43f9ca21b07b8bbe3bcff1f4d1d497723e00ba752c7208427d3387865749\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e25ea54e1d18f68caeb99a9551b118aaeeeca1f57819917f65ff4e3fe32490ca\""
Jan 17 12:18:30.106491 containerd[1606]: time="2025-01-17T12:18:30.105384243Z" level=info msg="StartContainer for \"e25ea54e1d18f68caeb99a9551b118aaeeeca1f57819917f65ff4e3fe32490ca\""
Jan 17 12:18:30.247660 containerd[1606]: time="2025-01-17T12:18:30.247595710Z" level=info msg="StartContainer for \"e25ea54e1d18f68caeb99a9551b118aaeeeca1f57819917f65ff4e3fe32490ca\" returns successfully"
Jan 17 12:18:30.361472 kubelet[2769]: E0117 12:18:30.360194 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:31.977016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679478073.mount: Deactivated successfully.
Jan 17 12:18:32.083798 containerd[1606]: time="2025-01-17T12:18:32.082936407Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:32.084847 containerd[1606]: time="2025-01-17T12:18:32.084784962Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 17 12:18:32.085844 containerd[1606]: time="2025-01-17T12:18:32.085791764Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:32.092081 containerd[1606]: time="2025-01-17T12:18:32.092025545Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:32.093165 containerd[1606]: time="2025-01-17T12:18:32.093124651Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.260286403s"
Jan 17 12:18:32.093296 containerd[1606]: time="2025-01-17T12:18:32.093282428Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 17 12:18:32.099453 containerd[1606]: time="2025-01-17T12:18:32.099384338Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 17 12:18:32.131166 containerd[1606]: time="2025-01-17T12:18:32.130795844Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830\""
Jan 17 12:18:32.133360 containerd[1606]: time="2025-01-17T12:18:32.133176346Z" level=info msg="StartContainer for \"9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830\""
Jan 17 12:18:32.235942 containerd[1606]: time="2025-01-17T12:18:32.229759207Z" level=info msg="StartContainer for \"9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830\" returns successfully"
Jan 17 12:18:32.277191 containerd[1606]: time="2025-01-17T12:18:32.276773302Z" level=info msg="shim disconnected" id=9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830 namespace=k8s.io
Jan 17 12:18:32.277191 containerd[1606]: time="2025-01-17T12:18:32.276881777Z" level=warning msg="cleaning up after shim disconnected" id=9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830 namespace=k8s.io
Jan 17 12:18:32.277191 containerd[1606]: time="2025-01-17T12:18:32.276899242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:18:32.377915 kubelet[2769]: E0117 12:18:32.377336 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:32.381852 containerd[1606]: time="2025-01-17T12:18:32.380332468Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 17 12:18:32.418962 kubelet[2769]: I0117 12:18:32.418904 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b7nxz" podStartSLOduration=3.418850416 podStartE2EDuration="3.418850416s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:30.400484977 +0000 UTC m=+13.464956282" watchObservedRunningTime="2025-01-17 12:18:32.418850416 +0000 UTC m=+15.483321707"
Jan 17 12:18:32.797198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eeceed6b28925b734b72ecf05c601e1e48dff4ea1e84da2f66762e7cfa3e830-rootfs.mount: Deactivated successfully.
Jan 17 12:18:34.606146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201200240.mount: Deactivated successfully.
Jan 17 12:18:36.033705 containerd[1606]: time="2025-01-17T12:18:36.033595723Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:36.035469 containerd[1606]: time="2025-01-17T12:18:36.035033406Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 17 12:18:36.036620 containerd[1606]: time="2025-01-17T12:18:36.036538932Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:36.044388 containerd[1606]: time="2025-01-17T12:18:36.044293575Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:36.049882 containerd[1606]: time="2025-01-17T12:18:36.049468401Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.669069534s"
Jan 17 12:18:36.049882 containerd[1606]: time="2025-01-17T12:18:36.049561796Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 17 12:18:36.068592 containerd[1606]: time="2025-01-17T12:18:36.067738960Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 12:18:36.097323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358959193.mount: Deactivated successfully.
Jan 17 12:18:36.104198 containerd[1606]: time="2025-01-17T12:18:36.104123288Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a\""
Jan 17 12:18:36.111536 containerd[1606]: time="2025-01-17T12:18:36.111473030Z" level=info msg="StartContainer for \"6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a\""
Jan 17 12:18:36.223841 containerd[1606]: time="2025-01-17T12:18:36.223754674Z" level=info msg="StartContainer for \"6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a\" returns successfully"
Jan 17 12:18:36.236146 kubelet[2769]: I0117 12:18:36.236059 2769 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:18:36.289424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a-rootfs.mount: Deactivated successfully.
Jan 17 12:18:36.325868 kubelet[2769]: I0117 12:18:36.325794 2769 topology_manager.go:215] "Topology Admit Handler" podUID="b571e7c1-b831-45b4-b31b-5a2113261324" podNamespace="kube-system" podName="coredns-76f75df574-2k65x"
Jan 17 12:18:36.335131 kubelet[2769]: I0117 12:18:36.326066 2769 topology_manager.go:215] "Topology Admit Handler" podUID="37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5" podNamespace="kube-system" podName="coredns-76f75df574-d5w45"
Jan 17 12:18:36.352766 containerd[1606]: time="2025-01-17T12:18:36.352631117Z" level=info msg="shim disconnected" id=6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a namespace=k8s.io
Jan 17 12:18:36.353287 containerd[1606]: time="2025-01-17T12:18:36.353231950Z" level=warning msg="cleaning up after shim disconnected" id=6e8b808c59e07a87820d575bc3edd793e793c754a71df753d5cd11e900d9445a namespace=k8s.io
Jan 17 12:18:36.353512 containerd[1606]: time="2025-01-17T12:18:36.353485786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:18:36.418690 kubelet[2769]: I0117 12:18:36.418619 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b571e7c1-b831-45b4-b31b-5a2113261324-config-volume\") pod \"coredns-76f75df574-2k65x\" (UID: \"b571e7c1-b831-45b4-b31b-5a2113261324\") " pod="kube-system/coredns-76f75df574-2k65x"
Jan 17 12:18:36.418914 kubelet[2769]: I0117 12:18:36.418736 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5-config-volume\") pod \"coredns-76f75df574-d5w45\" (UID: \"37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5\") " pod="kube-system/coredns-76f75df574-d5w45"
Jan 17 12:18:36.418914 kubelet[2769]: I0117 12:18:36.418857 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzffp\" (UniqueName: \"kubernetes.io/projected/b571e7c1-b831-45b4-b31b-5a2113261324-kube-api-access-vzffp\") pod \"coredns-76f75df574-2k65x\" (UID: \"b571e7c1-b831-45b4-b31b-5a2113261324\") " pod="kube-system/coredns-76f75df574-2k65x"
Jan 17 12:18:36.419039 kubelet[2769]: I0117 12:18:36.419024 2769 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nr6k\" (UniqueName: \"kubernetes.io/projected/37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5-kube-api-access-4nr6k\") pod \"coredns-76f75df574-d5w45\" (UID: \"37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5\") " pod="kube-system/coredns-76f75df574-d5w45"
Jan 17 12:18:36.423923 kubelet[2769]: E0117 12:18:36.423853 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:36.432749 containerd[1606]: time="2025-01-17T12:18:36.432410778Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 17 12:18:36.467898 containerd[1606]: time="2025-01-17T12:18:36.467710362Z" level=info msg="CreateContainer within sandbox \"de4ccbfc5ecc63943838ef710ad950453aa2a934d2ee3ff5c85ca29fd683ccbc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"91f784e08fb7d8b5bf21b4a131befcfde6fa1631ee969ff9dba1a0c2ad988c73\""
Jan 17 12:18:36.469052 containerd[1606]: time="2025-01-17T12:18:36.468887531Z" level=info msg="StartContainer for \"91f784e08fb7d8b5bf21b4a131befcfde6fa1631ee969ff9dba1a0c2ad988c73\""
Jan 17 12:18:36.598986 containerd[1606]: time="2025-01-17T12:18:36.598590865Z" level=info msg="StartContainer for \"91f784e08fb7d8b5bf21b4a131befcfde6fa1631ee969ff9dba1a0c2ad988c73\" returns successfully"
Jan 17 12:18:36.643025 kubelet[2769]: E0117 12:18:36.642636 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:36.644669 containerd[1606]: time="2025-01-17T12:18:36.643891708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2k65x,Uid:b571e7c1-b831-45b4-b31b-5a2113261324,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:36.649938 kubelet[2769]: E0117 12:18:36.649884 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:36.652048 containerd[1606]: time="2025-01-17T12:18:36.651893702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d5w45,Uid:37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:36.698475 containerd[1606]: time="2025-01-17T12:18:36.698264273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2k65x,Uid:b571e7c1-b831-45b4-b31b-5a2113261324,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"980eea7ec3772a98aa2354435d8fbf382ec69530abde9f249d53c80b4435018f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 12:18:36.699189 kubelet[2769]: E0117 12:18:36.698861 2769 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980eea7ec3772a98aa2354435d8fbf382ec69530abde9f249d53c80b4435018f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 12:18:36.699189 kubelet[2769]: E0117 12:18:36.698929 2769 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980eea7ec3772a98aa2354435d8fbf382ec69530abde9f249d53c80b4435018f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2k65x"
Jan 17 12:18:36.699870 kubelet[2769]: E0117 12:18:36.699404 2769 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980eea7ec3772a98aa2354435d8fbf382ec69530abde9f249d53c80b4435018f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2k65x"
Jan 17 12:18:36.699870 kubelet[2769]: E0117 12:18:36.699520 2769 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2k65x_kube-system(b571e7c1-b831-45b4-b31b-5a2113261324)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2k65x_kube-system(b571e7c1-b831-45b4-b31b-5a2113261324)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"980eea7ec3772a98aa2354435d8fbf382ec69530abde9f249d53c80b4435018f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-2k65x" podUID="b571e7c1-b831-45b4-b31b-5a2113261324"
Jan 17 12:18:36.706954 containerd[1606]: time="2025-01-17T12:18:36.706740503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d5w45,Uid:37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0b0b1931d7d0e25d0049b6c9e21ee616b6f89616ffe83fc628b89575e74ac6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 12:18:36.707291 kubelet[2769]: E0117 12:18:36.707181 2769 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b0b1931d7d0e25d0049b6c9e21ee616b6f89616ffe83fc628b89575e74ac6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 17 12:18:36.707291 kubelet[2769]: E0117 12:18:36.707258 2769 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b0b1931d7d0e25d0049b6c9e21ee616b6f89616ffe83fc628b89575e74ac6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-d5w45"
Jan 17 12:18:36.707291 kubelet[2769]: E0117 12:18:36.707288 2769 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b0b1931d7d0e25d0049b6c9e21ee616b6f89616ffe83fc628b89575e74ac6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-d5w45"
Jan 17 12:18:36.707452 kubelet[2769]: E0117 12:18:36.707374 2769 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d5w45_kube-system(37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d5w45_kube-system(37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0b0b1931d7d0e25d0049b6c9e21ee616b6f89616ffe83fc628b89575e74ac6b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-d5w45" podUID="37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5"
Jan 17 12:18:37.434142 kubelet[2769]: E0117 12:18:37.431843 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:37.466549 kubelet[2769]: I0117 12:18:37.466486 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9vgmw" podStartSLOduration=2.237966342 podStartE2EDuration="8.466430432s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="2025-01-17 12:18:29.828901095 +0000 UTC m=+12.893372380" lastFinishedPulling="2025-01-17 12:18:36.057365183 +0000 UTC m=+19.121836470" observedRunningTime="2025-01-17 12:18:37.46291816 +0000 UTC m=+20.527389456" watchObservedRunningTime="2025-01-17 12:18:37.466430432 +0000 UTC m=+20.530901719"
Jan 17 12:18:37.692303 systemd-networkd[1231]: flannel.1: Link UP
Jan 17 12:18:37.692316 systemd-networkd[1231]: flannel.1: Gained carrier
Jan 17 12:18:38.444126 kubelet[2769]: E0117 12:18:38.444066 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:39.669533 systemd-networkd[1231]: flannel.1: Gained IPv6LL
Jan 17 12:18:51.219306 kubelet[2769]: E0117 12:18:51.215379 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:51.220242 containerd[1606]: time="2025-01-17T12:18:51.216468793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2k65x,Uid:b571e7c1-b831-45b4-b31b-5a2113261324,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:51.221290 kubelet[2769]: E0117 12:18:51.220166 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:51.226020 containerd[1606]: time="2025-01-17T12:18:51.224432244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d5w45,Uid:37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5,Namespace:kube-system,Attempt:0,}"
Jan 17 12:18:51.341649 systemd-networkd[1231]: cni0: Link UP
Jan 17 12:18:51.343035 systemd-networkd[1231]: cni0: Gained carrier
Jan 17 12:18:51.362342 systemd-networkd[1231]: vethb9512013: Link UP
Jan 17 12:18:51.367074 kernel: cni0: port 1(vethb9512013) entered blocking state
Jan 17 12:18:51.367212 kernel: cni0: port 1(vethb9512013) entered disabled state
Jan 17 12:18:51.365570 systemd-networkd[1231]: cni0: Lost carrier
Jan 17 12:18:51.370787 kernel: vethb9512013: entered allmulticast mode
Jan 17 12:18:51.373647 kernel: vethb9512013: entered promiscuous mode
Jan 17 12:18:51.377188 systemd-networkd[1231]: veth9bcc6111: Link UP
Jan 17 12:18:51.382811 kernel: cni0: port 2(veth9bcc6111) entered blocking state
Jan 17 12:18:51.382998 kernel: cni0: port 2(veth9bcc6111) entered disabled state
Jan 17 12:18:51.386903 kernel: veth9bcc6111: entered allmulticast mode
Jan 17 12:18:51.389742 kernel: veth9bcc6111: entered promiscuous mode
Jan 17 12:18:51.396226 kernel: cni0: port 2(veth9bcc6111) entered blocking state
Jan 17 12:18:51.396371 kernel: cni0: port 2(veth9bcc6111) entered forwarding state
Jan 17 12:18:51.394049 systemd-networkd[1231]: cni0: Gained carrier
Jan 17 12:18:51.406050 kernel: cni0: port 2(veth9bcc6111) entered disabled state
Jan 17 12:18:51.409291 kernel: cni0: port 1(vethb9512013) entered blocking state
Jan 17 12:18:51.410880 kernel: cni0: port 1(vethb9512013) entered forwarding state
Jan 17 12:18:51.409402 systemd-networkd[1231]: vethb9512013: Gained carrier
Jan 17 12:18:51.443465 kernel: cni0: port 2(veth9bcc6111) entered blocking state
Jan 17 12:18:51.444096 kernel: cni0: port 2(veth9bcc6111) entered forwarding state
Jan 17 12:18:51.443035 systemd-networkd[1231]: veth9bcc6111: Gained carrier
Jan 17 12:18:51.444239 containerd[1606]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"}
Jan 17 12:18:51.444239 containerd[1606]: delegateAdd: netconf sent to delegate plugin:
Jan 17 12:18:51.468082 containerd[1606]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 17 12:18:51.468082 containerd[1606]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 17 12:18:51.468082 containerd[1606]: delegateAdd: netconf sent to delegate plugin:
Jan 17 12:18:51.510218 containerd[1606]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 17 12:18:51.510218 containerd[1606]: time="2025-01-17T12:18:51.508032333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:51.510218 containerd[1606]: time="2025-01-17T12:18:51.508835408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:51.510218 containerd[1606]: time="2025-01-17T12:18:51.508890601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:51.510218 containerd[1606]: time="2025-01-17T12:18:51.509312026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:51.542286 containerd[1606]: time="2025-01-17T12:18:51.541958420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:51.542286 containerd[1606]: time="2025-01-17T12:18:51.542123758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:51.542286 containerd[1606]: time="2025-01-17T12:18:51.542153912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:51.543997 containerd[1606]: time="2025-01-17T12:18:51.543841105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:51.655491 containerd[1606]: time="2025-01-17T12:18:51.655380526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d5w45,Uid:37f6c8ff-cf5f-439f-a8e0-3c57e8678ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5734de2357961c5e1e7061476aa2d5f2f6f1072d1ee9d09d0f5ec172cd469328\"" Jan 17 12:18:51.685783 kubelet[2769]: E0117 12:18:51.685716 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:51.742878 containerd[1606]: time="2025-01-17T12:18:51.741756268Z" level=info msg="CreateContainer within sandbox \"5734de2357961c5e1e7061476aa2d5f2f6f1072d1ee9d09d0f5ec172cd469328\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:18:51.756285 containerd[1606]: time="2025-01-17T12:18:51.755756285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2k65x,Uid:b571e7c1-b831-45b4-b31b-5a2113261324,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aa21e295b2d305493eb76f1c180ca0c640f99f61ed6f7dba75e14c7d9cf8171\"" Jan 17 12:18:51.757894 kubelet[2769]: E0117 12:18:51.757323 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:51.764424 containerd[1606]: time="2025-01-17T12:18:51.764285801Z" level=info msg="CreateContainer within sandbox \"5734de2357961c5e1e7061476aa2d5f2f6f1072d1ee9d09d0f5ec172cd469328\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"149b770991cf99ab894f28b006ba16991a4bc4ed89f8ea6531dbff0e5f16e665\"" Jan 17 12:18:51.769095 containerd[1606]: time="2025-01-17T12:18:51.768441266Z" level=info msg="StartContainer for \"149b770991cf99ab894f28b006ba16991a4bc4ed89f8ea6531dbff0e5f16e665\"" Jan 17 
12:18:51.772282 containerd[1606]: time="2025-01-17T12:18:51.772136405Z" level=info msg="CreateContainer within sandbox \"6aa21e295b2d305493eb76f1c180ca0c640f99f61ed6f7dba75e14c7d9cf8171\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:18:51.799481 containerd[1606]: time="2025-01-17T12:18:51.799392218Z" level=info msg="CreateContainer within sandbox \"6aa21e295b2d305493eb76f1c180ca0c640f99f61ed6f7dba75e14c7d9cf8171\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38b9e4267c03bee0ec62c746ab77bd05f63ec193122f5560fdb4e77d20be23b2\"" Jan 17 12:18:51.803427 containerd[1606]: time="2025-01-17T12:18:51.803361968Z" level=info msg="StartContainer for \"38b9e4267c03bee0ec62c746ab77bd05f63ec193122f5560fdb4e77d20be23b2\"" Jan 17 12:18:51.915710 containerd[1606]: time="2025-01-17T12:18:51.915639267Z" level=info msg="StartContainer for \"149b770991cf99ab894f28b006ba16991a4bc4ed89f8ea6531dbff0e5f16e665\" returns successfully" Jan 17 12:18:51.961800 containerd[1606]: time="2025-01-17T12:18:51.961696067Z" level=info msg="StartContainer for \"38b9e4267c03bee0ec62c746ab77bd05f63ec193122f5560fdb4e77d20be23b2\" returns successfully" Jan 17 12:18:52.517507 kubelet[2769]: E0117 12:18:52.517396 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:52.531426 kubelet[2769]: E0117 12:18:52.529911 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:52.594396 kubelet[2769]: I0117 12:18:52.594320 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2k65x" podStartSLOduration=23.593322863 podStartE2EDuration="23.593322863s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:52.592262652 +0000 UTC m=+35.656733945" watchObservedRunningTime="2025-01-17 12:18:52.593322863 +0000 UTC m=+35.657794149" Jan 17 12:18:52.594822 kubelet[2769]: I0117 12:18:52.594522 2769 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-d5w45" podStartSLOduration=23.594469377 podStartE2EDuration="23.594469377s" podCreationTimestamp="2025-01-17 12:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:52.561902467 +0000 UTC m=+35.626373755" watchObservedRunningTime="2025-01-17 12:18:52.594469377 +0000 UTC m=+35.658940667" Jan 17 12:18:52.725443 systemd-networkd[1231]: vethb9512013: Gained IPv6LL Jan 17 12:18:52.853394 systemd-networkd[1231]: cni0: Gained IPv6LL Jan 17 12:18:53.429737 systemd-networkd[1231]: veth9bcc6111: Gained IPv6LL Jan 17 12:18:53.534242 kubelet[2769]: E0117 12:18:53.534199 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:53.536603 kubelet[2769]: E0117 12:18:53.536415 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:54.106180 systemd[1]: Started sshd@7-143.198.98.155:22-139.178.68.195:48232.service - OpenSSH per-connection server daemon (139.178.68.195:48232). 
Jan 17 12:18:54.199271 sshd[3670]: Accepted publickey for core from 139.178.68.195 port 48232 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:18:54.207514 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:18:54.219028 systemd-logind[1585]: New session 8 of user core.
Jan 17 12:18:54.224870 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:18:54.482240 sshd[3670]: pam_unix(sshd:session): session closed for user core
Jan 17 12:18:54.488652 systemd[1]: sshd@7-143.198.98.155:22-139.178.68.195:48232.service: Deactivated successfully.
Jan 17 12:18:54.496563 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:18:54.497419 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:18:54.499254 systemd-logind[1585]: Removed session 8.
Jan 17 12:18:54.537301 kubelet[2769]: E0117 12:18:54.536956 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:54.537301 kubelet[2769]: E0117 12:18:54.537192 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:59.499803 systemd[1]: Started sshd@8-143.198.98.155:22-139.178.68.195:39862.service - OpenSSH per-connection server daemon (139.178.68.195:39862).
Jan 17 12:18:59.574057 sshd[3706]: Accepted publickey for core from 139.178.68.195 port 39862 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:18:59.578587 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:18:59.590821 systemd-logind[1585]: New session 9 of user core.
Jan 17 12:18:59.596733 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:18:59.814273 sshd[3706]: pam_unix(sshd:session): session closed for user core
Jan 17 12:18:59.828612 systemd[1]: sshd@8-143.198.98.155:22-139.178.68.195:39862.service: Deactivated successfully.
Jan 17 12:18:59.834172 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:18:59.834954 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:18:59.838733 systemd-logind[1585]: Removed session 9.
Jan 17 12:19:04.827655 systemd[1]: Started sshd@9-143.198.98.155:22-139.178.68.195:45026.service - OpenSSH per-connection server daemon (139.178.68.195:45026).
Jan 17 12:19:04.898929 sshd[3744]: Accepted publickey for core from 139.178.68.195 port 45026 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:04.908445 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:04.918360 systemd-logind[1585]: New session 10 of user core.
Jan 17 12:19:04.925010 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:19:05.135403 sshd[3744]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:05.144245 systemd[1]: sshd@9-143.198.98.155:22-139.178.68.195:45026.service: Deactivated successfully.
Jan 17 12:19:05.151489 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:19:05.154807 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:19:05.162561 systemd[1]: Started sshd@10-143.198.98.155:22-139.178.68.195:45034.service - OpenSSH per-connection server daemon (139.178.68.195:45034).
Jan 17 12:19:05.165383 systemd-logind[1585]: Removed session 10.
Jan 17 12:19:05.230012 sshd[3758]: Accepted publickey for core from 139.178.68.195 port 45034 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:05.233127 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:05.243606 systemd-logind[1585]: New session 11 of user core.
Jan 17 12:19:05.247331 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:19:05.580012 sshd[3758]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:05.624742 systemd[1]: Started sshd@11-143.198.98.155:22-139.178.68.195:45040.service - OpenSSH per-connection server daemon (139.178.68.195:45040).
Jan 17 12:19:05.628749 systemd[1]: sshd@10-143.198.98.155:22-139.178.68.195:45034.service: Deactivated successfully.
Jan 17 12:19:05.654199 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:19:05.662609 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:19:05.679238 systemd-logind[1585]: Removed session 11.
Jan 17 12:19:05.729009 sshd[3767]: Accepted publickey for core from 139.178.68.195 port 45040 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:05.731456 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:05.742580 systemd-logind[1585]: New session 12 of user core.
Jan 17 12:19:05.750439 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:19:05.958376 sshd[3767]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:05.962823 systemd[1]: sshd@11-143.198.98.155:22-139.178.68.195:45040.service: Deactivated successfully.
Jan 17 12:19:05.972174 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:19:05.976822 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:19:05.985820 systemd-logind[1585]: Removed session 12.
Jan 17 12:19:10.982584 systemd[1]: Started sshd@12-143.198.98.155:22-139.178.68.195:45046.service - OpenSSH per-connection server daemon (139.178.68.195:45046).
Jan 17 12:19:11.103940 sshd[3807]: Accepted publickey for core from 139.178.68.195 port 45046 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:11.106414 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:11.116198 systemd-logind[1585]: New session 13 of user core.
Jan 17 12:19:11.121591 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:19:11.400376 sshd[3807]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:11.409416 systemd[1]: sshd@12-143.198.98.155:22-139.178.68.195:45046.service: Deactivated successfully.
Jan 17 12:19:11.417482 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:19:11.417912 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:19:11.425916 systemd-logind[1585]: Removed session 13.
Jan 17 12:19:16.412403 systemd[1]: Started sshd@13-143.198.98.155:22-139.178.68.195:59842.service - OpenSSH per-connection server daemon (139.178.68.195:59842).
Jan 17 12:19:16.510992 sshd[3841]: Accepted publickey for core from 139.178.68.195 port 59842 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:16.514065 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:16.543741 systemd-logind[1585]: New session 14 of user core.
Jan 17 12:19:16.549713 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:19:16.779040 sshd[3841]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:16.791452 systemd[1]: Started sshd@14-143.198.98.155:22-139.178.68.195:59854.service - OpenSSH per-connection server daemon (139.178.68.195:59854).
Jan 17 12:19:16.792464 systemd[1]: sshd@13-143.198.98.155:22-139.178.68.195:59842.service: Deactivated successfully.
Jan 17 12:19:16.809772 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:19:16.813262 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:19:16.815668 systemd-logind[1585]: Removed session 14.
Jan 17 12:19:16.893353 sshd[3852]: Accepted publickey for core from 139.178.68.195 port 59854 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:16.897715 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:16.917434 systemd-logind[1585]: New session 15 of user core.
Jan 17 12:19:16.940856 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:19:17.446663 sshd[3852]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:17.464535 systemd[1]: Started sshd@15-143.198.98.155:22-139.178.68.195:59864.service - OpenSSH per-connection server daemon (139.178.68.195:59864).
Jan 17 12:19:17.465920 systemd[1]: sshd@14-143.198.98.155:22-139.178.68.195:59854.service: Deactivated successfully.
Jan 17 12:19:17.474004 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:19:17.477685 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:19:17.485103 systemd-logind[1585]: Removed session 15.
Jan 17 12:19:17.542112 sshd[3866]: Accepted publickey for core from 139.178.68.195 port 59864 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:17.545908 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:17.557140 systemd-logind[1585]: New session 16 of user core.
Jan 17 12:19:17.560771 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:19:20.017273 sshd[3866]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:20.052495 systemd[1]: Started sshd@16-143.198.98.155:22-139.178.68.195:59880.service - OpenSSH per-connection server daemon (139.178.68.195:59880).
Jan 17 12:19:20.053508 systemd[1]: sshd@15-143.198.98.155:22-139.178.68.195:59864.service: Deactivated successfully.
Jan 17 12:19:20.058270 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:19:20.068182 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:19:20.076878 systemd-logind[1585]: Removed session 16.
Jan 17 12:19:20.165955 sshd[3904]: Accepted publickey for core from 139.178.68.195 port 59880 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:20.168248 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:20.176929 systemd-logind[1585]: New session 17 of user core.
Jan 17 12:19:20.192337 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:19:20.714290 sshd[3904]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:20.730351 systemd[1]: Started sshd@17-143.198.98.155:22-139.178.68.195:59896.service - OpenSSH per-connection server daemon (139.178.68.195:59896).
Jan 17 12:19:20.734293 systemd[1]: sshd@16-143.198.98.155:22-139.178.68.195:59880.service: Deactivated successfully.
Jan 17 12:19:20.749915 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:19:20.750246 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:19:20.757770 systemd-logind[1585]: Removed session 17.
Jan 17 12:19:20.837081 sshd[3918]: Accepted publickey for core from 139.178.68.195 port 59896 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:20.843603 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:20.857515 systemd-logind[1585]: New session 18 of user core.
Jan 17 12:19:20.862587 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:19:21.094740 sshd[3918]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:21.103797 systemd[1]: sshd@17-143.198.98.155:22-139.178.68.195:59896.service: Deactivated successfully.
Jan 17 12:19:21.110875 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:19:21.112494 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:19:21.120774 systemd-logind[1585]: Removed session 18.
Jan 17 12:19:26.110530 systemd[1]: Started sshd@18-143.198.98.155:22-139.178.68.195:51946.service - OpenSSH per-connection server daemon (139.178.68.195:51946).
Jan 17 12:19:26.193152 sshd[3957]: Accepted publickey for core from 139.178.68.195 port 51946 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:26.196040 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:26.208131 systemd-logind[1585]: New session 19 of user core.
Jan 17 12:19:26.216710 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:19:26.464803 sshd[3957]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:26.476028 systemd[1]: sshd@18-143.198.98.155:22-139.178.68.195:51946.service: Deactivated successfully.
Jan 17 12:19:26.482491 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:19:26.484668 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:19:26.500161 systemd-logind[1585]: Removed session 19.
Jan 17 12:19:31.496090 systemd[1]: Started sshd@19-143.198.98.155:22-139.178.68.195:51956.service - OpenSSH per-connection server daemon (139.178.68.195:51956).
Jan 17 12:19:31.581148 sshd[3997]: Accepted publickey for core from 139.178.68.195 port 51956 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:31.584011 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:31.605371 systemd-logind[1585]: New session 20 of user core.
Jan 17 12:19:31.611750 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:19:31.923408 sshd[3997]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:31.936707 systemd[1]: sshd@19-143.198.98.155:22-139.178.68.195:51956.service: Deactivated successfully.
Jan 17 12:19:31.950876 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:19:31.951868 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:19:31.957048 systemd-logind[1585]: Removed session 20.
Jan 17 12:19:33.210040 kubelet[2769]: E0117 12:19:33.208512 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:33.211682 kubelet[2769]: E0117 12:19:33.211358 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:36.928584 systemd[1]: Started sshd@20-143.198.98.155:22-139.178.68.195:44124.service - OpenSSH per-connection server daemon (139.178.68.195:44124).
Jan 17 12:19:36.997576 sshd[4033]: Accepted publickey for core from 139.178.68.195 port 44124 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:37.001207 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:37.016608 systemd-logind[1585]: New session 21 of user core.
Jan 17 12:19:37.025762 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:19:37.322377 sshd[4033]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:37.367128 systemd[1]: sshd@20-143.198.98.155:22-139.178.68.195:44124.service: Deactivated successfully.
Jan 17 12:19:37.373731 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:19:37.392785 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:19:37.397384 systemd-logind[1585]: Removed session 21.
Jan 17 12:19:40.208888 kubelet[2769]: E0117 12:19:40.208809 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:42.354634 systemd[1]: Started sshd@21-143.198.98.155:22-139.178.68.195:44136.service - OpenSSH per-connection server daemon (139.178.68.195:44136).
Jan 17 12:19:42.440058 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 44136 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:42.442848 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:42.453268 systemd-logind[1585]: New session 22 of user core.
Jan 17 12:19:42.461412 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:19:42.745325 sshd[4068]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:42.756307 systemd[1]: sshd@21-143.198.98.155:22-139.178.68.195:44136.service: Deactivated successfully.
Jan 17 12:19:42.763669 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:19:42.764328 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:19:42.769424 systemd-logind[1585]: Removed session 22.