Jan 30 14:01:14.102789 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:01:14.102829 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:01:14.102848 kernel: BIOS-provided physical RAM map:
Jan 30 14:01:14.102858 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 14:01:14.102867 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 14:01:14.102876 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 14:01:14.102888 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 14:01:14.102898 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 14:01:14.102909 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:01:14.102922 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 14:01:14.102932 kernel: NX (Execute Disable) protection: active
Jan 30 14:01:14.104019 kernel: APIC: Static calls initialized
Jan 30 14:01:14.104043 kernel: SMBIOS 2.8 present.
Jan 30 14:01:14.104057 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 14:01:14.104071 kernel: Hypervisor detected: KVM
Jan 30 14:01:14.104098 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:01:14.104116 kernel: kvm-clock: using sched offset of 4078964425 cycles
Jan 30 14:01:14.104128 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:01:14.104140 kernel: tsc: Detected 2000.000 MHz processor
Jan 30 14:01:14.104184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:01:14.104197 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:01:14.104208 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 14:01:14.104218 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 14:01:14.104229 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:01:14.104246 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:01:14.104259 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 14:01:14.104274 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104285 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104298 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104309 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 14:01:14.104324 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104336 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104349 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104367 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:01:14.104378 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 14:01:14.104388 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 14:01:14.104401 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 14:01:14.104416 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 14:01:14.104426 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 14:01:14.104436 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 14:01:14.104456 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 14:01:14.104468 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 14:01:14.104480 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 14:01:14.104492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 14:01:14.104504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 14:01:14.104523 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 14:01:14.104534 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 14:01:14.104551 kernel: Zone ranges:
Jan 30 14:01:14.104563 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:01:14.104575 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 14:01:14.104585 kernel: Normal empty
Jan 30 14:01:14.104597 kernel: Movable zone start for each node
Jan 30 14:01:14.104609 kernel: Early memory node ranges
Jan 30 14:01:14.104620 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 14:01:14.104632 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 14:01:14.104642 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 14:01:14.104658 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:01:14.104691 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 14:01:14.104708 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 14:01:14.104719 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 14:01:14.104731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:01:14.104743 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:01:14.104755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:01:14.104768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:01:14.104782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:01:14.104800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:01:14.104812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:01:14.104823 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:01:14.104835 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 14:01:14.104848 kernel: TSC deadline timer available
Jan 30 14:01:14.104860 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 14:01:14.104871 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 14:01:14.104898 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 14:01:14.104916 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:01:14.104932 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:01:14.104971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 14:01:14.104983 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 14:01:14.104993 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 14:01:14.105004 kernel: pcpu-alloc: [0] 0 1
Jan 30 14:01:14.105014 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 14:01:14.105027 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:01:14.105042 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:01:14.105060 kernel: random: crng init done
Jan 30 14:01:14.105073 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:01:14.105087 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 14:01:14.105101 kernel: Fallback order for Node 0: 0
Jan 30 14:01:14.105115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 14:01:14.105129 kernel: Policy zone: DMA32
Jan 30 14:01:14.105143 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:01:14.105157 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 14:01:14.105172 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:01:14.105189 kernel: Kernel/User page tables isolation: enabled
Jan 30 14:01:14.105203 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:01:14.105216 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:01:14.105229 kernel: Dynamic Preempt: voluntary
Jan 30 14:01:14.105243 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:01:14.105259 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:01:14.105273 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:01:14.105286 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:01:14.105300 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:01:14.105318 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:01:14.105330 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:01:14.105345 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:01:14.105358 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 14:01:14.105370 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:01:14.105390 kernel: Console: colour VGA+ 80x25
Jan 30 14:01:14.105404 kernel: printk: console [tty0] enabled
Jan 30 14:01:14.105415 kernel: printk: console [ttyS0] enabled
Jan 30 14:01:14.105426 kernel: ACPI: Core revision 20230628
Jan 30 14:01:14.105439 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 14:01:14.105458 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:01:14.105469 kernel: x2apic enabled
Jan 30 14:01:14.105480 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:01:14.105492 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 14:01:14.105504 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 30 14:01:14.105517 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 30 14:01:14.105533 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 14:01:14.105546 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 14:01:14.105578 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:01:14.105591 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 14:01:14.105602 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:01:14.105622 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:01:14.105635 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 14:01:14.105647 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:01:14.105659 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:01:14.105671 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 14:01:14.105684 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 14:01:14.105710 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:01:14.105724 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:01:14.105737 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:01:14.105749 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:01:14.105763 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 14:01:14.105776 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:01:14.105792 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:01:14.105806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:01:14.105826 kernel: landlock: Up and running.
Jan 30 14:01:14.105840 kernel: SELinux: Initializing.
Jan 30 14:01:14.105852 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:01:14.105867 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:01:14.105883 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 14:01:14.105895 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:01:14.105907 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:01:14.105918 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:01:14.105936 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 14:01:14.108109 kernel: signal: max sigframe size: 1776
Jan 30 14:01:14.108120 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:01:14.108130 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:01:14.108139 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 14:01:14.108147 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:01:14.108156 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:01:14.108164 kernel: .... node #0, CPUs: #1
Jan 30 14:01:14.108172 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:01:14.108189 kernel: smpboot: Max logical packages: 1
Jan 30 14:01:14.108208 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 30 14:01:14.108216 kernel: devtmpfs: initialized
Jan 30 14:01:14.108225 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:01:14.108233 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:01:14.108241 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:01:14.108250 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:01:14.108258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:01:14.108266 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:01:14.108275 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:01:14.108287 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:01:14.108295 kernel: audit: type=2000 audit(1738245671.961:1): state=initialized audit_enabled=0 res=1
Jan 30 14:01:14.108304 kernel: cpuidle: using governor menu
Jan 30 14:01:14.108312 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:01:14.108320 kernel: dca service started, version 1.12.1
Jan 30 14:01:14.108329 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:01:14.108337 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:01:14.108346 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:01:14.108354 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:01:14.108365 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:01:14.108374 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:01:14.108382 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:01:14.108390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:01:14.108398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:01:14.108406 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:01:14.108414 kernel: ACPI: Interpreter enabled
Jan 30 14:01:14.108423 kernel: ACPI: PM: (supports S0 S5)
Jan 30 14:01:14.108431 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:01:14.108442 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:01:14.108450 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:01:14.108461 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 14:01:14.108477 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:01:14.108815 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:01:14.111300 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 14:01:14.111593 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 14:01:14.111631 kernel: acpiphp: Slot [3] registered
Jan 30 14:01:14.111647 kernel: acpiphp: Slot [4] registered
Jan 30 14:01:14.111662 kernel: acpiphp: Slot [5] registered
Jan 30 14:01:14.111675 kernel: acpiphp: Slot [6] registered
Jan 30 14:01:14.111690 kernel: acpiphp: Slot [7] registered
Jan 30 14:01:14.111705 kernel: acpiphp: Slot [8] registered
Jan 30 14:01:14.111716 kernel: acpiphp: Slot [9] registered
Jan 30 14:01:14.111729 kernel: acpiphp: Slot [10] registered
Jan 30 14:01:14.111741 kernel: acpiphp: Slot [11] registered
Jan 30 14:01:14.111759 kernel: acpiphp: Slot [12] registered
Jan 30 14:01:14.111773 kernel: acpiphp: Slot [13] registered
Jan 30 14:01:14.111787 kernel: acpiphp: Slot [14] registered
Jan 30 14:01:14.111800 kernel: acpiphp: Slot [15] registered
Jan 30 14:01:14.111814 kernel: acpiphp: Slot [16] registered
Jan 30 14:01:14.111828 kernel: acpiphp: Slot [17] registered
Jan 30 14:01:14.111842 kernel: acpiphp: Slot [18] registered
Jan 30 14:01:14.111856 kernel: acpiphp: Slot [19] registered
Jan 30 14:01:14.111870 kernel: acpiphp: Slot [20] registered
Jan 30 14:01:14.111885 kernel: acpiphp: Slot [21] registered
Jan 30 14:01:14.111903 kernel: acpiphp: Slot [22] registered
Jan 30 14:01:14.111918 kernel: acpiphp: Slot [23] registered
Jan 30 14:01:14.111926 kernel: acpiphp: Slot [24] registered
Jan 30 14:01:14.111935 kernel: acpiphp: Slot [25] registered
Jan 30 14:01:14.111978 kernel: acpiphp: Slot [26] registered
Jan 30 14:01:14.111986 kernel: acpiphp: Slot [27] registered
Jan 30 14:01:14.111995 kernel: acpiphp: Slot [28] registered
Jan 30 14:01:14.112003 kernel: acpiphp: Slot [29] registered
Jan 30 14:01:14.112011 kernel: acpiphp: Slot [30] registered
Jan 30 14:01:14.112022 kernel: acpiphp: Slot [31] registered
Jan 30 14:01:14.112031 kernel: PCI host bridge to bus 0000:00
Jan 30 14:01:14.112190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:01:14.112312 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:01:14.112424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:01:14.112519 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 14:01:14.112605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 14:01:14.112691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:01:14.112821 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 14:01:14.114129 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 14:01:14.114279 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 14:01:14.114377 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 14:01:14.114479 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 14:01:14.114574 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 14:01:14.114675 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 14:01:14.114768 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 14:01:14.114887 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 14:01:14.116169 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 14:01:14.116390 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 14:01:14.116552 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 14:01:14.116722 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 14:01:14.116913 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 14:01:14.117108 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 14:01:14.117252 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 14:01:14.117357 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 14:01:14.117490 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 14:01:14.117665 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:01:14.117844 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:01:14.120189 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 14:01:14.120402 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 14:01:14.120565 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 14:01:14.120764 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:01:14.120932 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 14:01:14.123335 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 14:01:14.123519 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 14:01:14.123714 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 14:01:14.123865 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 14:01:14.124013 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 14:01:14.124163 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 14:01:14.124351 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:01:14.124452 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 14:01:14.124567 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 14:01:14.124691 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 14:01:14.124917 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:01:14.125067 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 14:01:14.125165 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 14:01:14.125257 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 14:01:14.125394 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 14:01:14.125501 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 14:01:14.125598 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 14:01:14.125609 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:01:14.125618 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:01:14.125626 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:01:14.125634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:01:14.125642 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 14:01:14.125654 kernel: iommu: Default domain type: Translated
Jan 30 14:01:14.125662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:01:14.125671 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:01:14.125679 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:01:14.125687 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 14:01:14.125696 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 14:01:14.125886 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 14:01:14.128191 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 14:01:14.128387 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:01:14.128407 kernel: vgaarb: loaded
Jan 30 14:01:14.128423 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 14:01:14.128437 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 14:01:14.128451 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:01:14.128464 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:01:14.128482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:01:14.128496 kernel: pnp: PnP ACPI init
Jan 30 14:01:14.128510 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 14:01:14.128530 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:01:14.128544 kernel: NET: Registered PF_INET protocol family
Jan 30 14:01:14.128558 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:01:14.128573 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 14:01:14.128587 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:01:14.128601 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:01:14.128615 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:01:14.128630 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 14:01:14.128645 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:01:14.128664 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:01:14.128678 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:01:14.128692 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:01:14.128844 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:01:14.129078 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:01:14.129211 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:01:14.129336 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 14:01:14.129459 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 14:01:14.129637 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 14:01:14.129797 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 14:01:14.129833 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 14:01:14.132189 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40924 usecs
Jan 30 14:01:14.132234 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:01:14.132250 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 14:01:14.132264 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 30 14:01:14.132277 kernel: Initialise system trusted keyrings
Jan 30 14:01:14.132290 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 14:01:14.132316 kernel: Key type asymmetric registered
Jan 30 14:01:14.132329 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:01:14.132341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:01:14.132353 kernel: io scheduler mq-deadline registered
Jan 30 14:01:14.132365 kernel: io scheduler kyber registered
Jan 30 14:01:14.132378 kernel: io scheduler bfq registered
Jan 30 14:01:14.132394 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:01:14.132410 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 14:01:14.132425 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 14:01:14.132447 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 14:01:14.132459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:01:14.132473 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:01:14.132486 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:01:14.132500 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:01:14.132512 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:01:14.132527 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 14:01:14.132754 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 14:01:14.132916 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 14:01:14.133074 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T14:01:13 UTC (1738245673)
Jan 30 14:01:14.133201 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 14:01:14.133219 kernel: intel_pstate: CPU model not supported
Jan 30 14:01:14.133233 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:01:14.133246 kernel: Segment Routing with IPv6
Jan 30 14:01:14.133260 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:01:14.133272 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:01:14.133285 kernel: Key type dns_resolver registered
Jan 30 14:01:14.133305 kernel: IPI shorthand broadcast: enabled
Jan 30 14:01:14.133318 kernel: sched_clock: Marking stable (1444006704, 178919815)->(1710087697, -87161178)
Jan 30 14:01:14.133332 kernel: registered taskstats version 1
Jan 30 14:01:14.133344 kernel: Loading compiled-in X.509 certificates
Jan 30 14:01:14.133358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 14:01:14.133372 kernel: Key type .fscrypt registered
Jan 30 14:01:14.133386 kernel: Key type fscrypt-provisioning registered
Jan 30 14:01:14.133399 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:01:14.133416 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:01:14.133429 kernel: ima: No architecture policies found
Jan 30 14:01:14.133443 kernel: clk: Disabling unused clocks
Jan 30 14:01:14.133457 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 14:01:14.133472 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 14:01:14.133510 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 14:01:14.133527 kernel: Run /init as init process
Jan 30 14:01:14.133542 kernel: with arguments:
Jan 30 14:01:14.133557 kernel: /init
Jan 30 14:01:14.133574 kernel: with environment:
Jan 30 14:01:14.133588 kernel: HOME=/
Jan 30 14:01:14.133602 kernel: TERM=linux
Jan 30 14:01:14.133618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:01:14.133637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:01:14.133654 systemd[1]: Detected virtualization kvm.
Jan 30 14:01:14.133668 systemd[1]: Detected architecture x86-64.
Jan 30 14:01:14.133682 systemd[1]: Running in initrd.
Jan 30 14:01:14.133701 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:01:14.133715 systemd[1]: Hostname set to .
Jan 30 14:01:14.133730 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:01:14.133745 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:01:14.133759 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:01:14.133778 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:01:14.133796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:01:14.133812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:01:14.133829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:01:14.133844 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:01:14.133861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:01:14.133877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:01:14.133892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:01:14.133907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:01:14.133922 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:01:14.136029 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:01:14.136078 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:01:14.136103 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:01:14.136118 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:01:14.136135 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:01:14.136154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:01:14.136168 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:01:14.136183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:01:14.136199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:01:14.136214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:01:14.136228 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:01:14.136244 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:01:14.136259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:01:14.136273 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:01:14.136292 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:01:14.136307 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:01:14.136322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:01:14.136337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:14.136352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:01:14.136368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:01:14.136382 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:01:14.136403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:01:14.136490 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 14:01:14.136534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:01:14.136551 systemd-journald[183]: Journal started
Jan 30 14:01:14.136586 systemd-journald[183]: Runtime Journal (/run/log/journal/2d98980b8f764cb6a41cc13ea802d1ab) is 4.9M, max 39.3M, 34.4M free.
Jan 30 14:01:14.113542 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 14:01:14.177975 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:01:14.178039 kernel: Bridge firewalling registered
Jan 30 14:01:14.178060 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:01:14.167328 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 14:01:14.187460 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:01:14.188498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:14.201314 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:14.207365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:01:14.211246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:01:14.217256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:01:14.238372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:01:14.247093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:01:14.249039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:01:14.258271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:01:14.259460 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:14.266179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:01:14.298667 systemd-resolved[216]: Positive Trust Anchors:
Jan 30 14:01:14.298690 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:01:14.298741 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:01:14.302463 systemd-resolved[216]: Defaulting to hostname 'linux'.
Jan 30 14:01:14.310156 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:01:14.311326 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:01:14.318176 dracut-cmdline[218]: dracut-dracut-053
Jan 30 14:01:14.336023 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:01:14.533014 kernel: SCSI subsystem initialized
Jan 30 14:01:14.559443 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:01:14.579003 kernel: iscsi: registered transport (tcp)
Jan 30 14:01:14.616114 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:01:14.616257 kernel: QLogic iSCSI HBA Driver
Jan 30 14:01:14.735327 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:01:14.750254 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:01:14.799345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:01:14.799441 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:01:14.801109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:01:14.877242 kernel: raid6: avx2x4 gen() 18424 MB/s
Jan 30 14:01:14.895198 kernel: raid6: avx2x2 gen() 21804 MB/s
Jan 30 14:01:14.912990 kernel: raid6: avx2x1 gen() 12919 MB/s
Jan 30 14:01:14.913094 kernel: raid6: using algorithm avx2x2 gen() 21804 MB/s
Jan 30 14:01:14.933160 kernel: raid6: .... xor() 11895 MB/s, rmw enabled
Jan 30 14:01:14.933255 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 14:01:14.965024 kernel: xor: automatically using best checksumming function avx
Jan 30 14:01:15.177091 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:01:15.199265 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:01:15.209538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:01:15.246109 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 30 14:01:15.255066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:01:15.266881 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:01:15.321178 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 30 14:01:15.381090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:01:15.392292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:01:15.481486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:01:15.489218 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:01:15.536670 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:01:15.541700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:01:15.543335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:01:15.546638 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:01:15.556466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:01:15.589999 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 14:01:15.722464 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 14:01:15.722696 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:01:15.722720 kernel: GPT:9289727 != 125829119
Jan 30 14:01:15.722737 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:01:15.722755 kernel: GPT:9289727 != 125829119
Jan 30 14:01:15.722787 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:01:15.722798 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:01:15.722810 kernel: scsi host0: Virtio SCSI HBA
Jan 30 14:01:15.723007 kernel: ACPI: bus type USB registered
Jan 30 14:01:15.723019 kernel: usbcore: registered new interface driver usbfs
Jan 30 14:01:15.723030 kernel: usbcore: registered new interface driver hub
Jan 30 14:01:15.723041 kernel: usbcore: registered new device driver usb
Jan 30 14:01:15.723052 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 14:01:15.597144 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:01:15.728043 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 14:01:15.738189 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 30 14:01:15.725625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:01:15.725830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:15.727199 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:15.728379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:01:15.728720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:15.729789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:15.750476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:15.780230 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 14:01:15.780319 kernel: AES CTR mode by8 optimization enabled
Jan 30 14:01:15.832298 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 14:01:15.950652 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (456)
Jan 30 14:01:15.950695 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 14:01:15.951011 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 14:01:15.951195 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 14:01:15.951375 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 14:01:15.951560 kernel: hub 1-0:1.0: USB hub found
Jan 30 14:01:15.951785 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 14:01:15.951997 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Jan 30 14:01:15.952015 kernel: libata version 3.00 loaded.
Jan 30 14:01:15.952033 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 14:01:15.952212 kernel: scsi host1: ata_piix
Jan 30 14:01:15.952415 kernel: scsi host2: ata_piix
Jan 30 14:01:15.952594 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 14:01:15.952612 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 14:01:15.954600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:15.977640 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 14:01:15.982760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 14:01:15.983458 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 14:01:15.992883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 14:01:16.001233 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:01:16.005239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:16.031111 disk-uuid[544]: Primary Header is updated.
Jan 30 14:01:16.031111 disk-uuid[544]: Secondary Entries is updated.
Jan 30 14:01:16.031111 disk-uuid[544]: Secondary Header is updated.
Jan 30 14:01:16.043275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:01:16.053184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:16.060167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:01:17.063632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:01:17.066120 disk-uuid[547]: The operation has completed successfully.
Jan 30 14:01:17.156148 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:01:17.156337 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:01:17.169428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:01:17.214752 sh[564]: Success
Jan 30 14:01:17.249604 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 14:01:17.397616 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:01:17.399770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:01:17.417645 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:01:17.463976 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 14:01:17.464078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:01:17.473967 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:01:17.474074 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:01:17.479849 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:01:17.498359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:01:17.500620 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:01:17.507314 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:01:17.511064 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:01:17.575125 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:01:17.579868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:01:17.579995 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:01:17.589982 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:01:17.642090 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:01:17.646747 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:01:17.670240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:01:17.683270 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:01:17.804308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:01:17.817444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:01:17.896432 systemd-networkd[750]: lo: Link UP
Jan 30 14:01:17.897380 systemd-networkd[750]: lo: Gained carrier
Jan 30 14:01:17.900829 systemd-networkd[750]: Enumeration completed
Jan 30 14:01:17.901267 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:01:17.901489 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 14:01:17.901495 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 14:01:17.902729 systemd[1]: Reached target network.target - Network.
Jan 30 14:01:17.902849 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:01:17.902855 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:01:17.903835 systemd-networkd[750]: eth0: Link UP
Jan 30 14:01:17.903842 systemd-networkd[750]: eth0: Gained carrier
Jan 30 14:01:17.903855 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 14:01:17.908890 systemd-networkd[750]: eth1: Link UP
Jan 30 14:01:17.908901 systemd-networkd[750]: eth1: Gained carrier
Jan 30 14:01:17.908925 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:01:17.926686 ignition[676]: Ignition 2.19.0
Jan 30 14:01:17.926711 ignition[676]: Stage: fetch-offline
Jan 30 14:01:17.928060 systemd-networkd[750]: eth0: DHCPv4 address 146.190.128.120/20, gateway 146.190.128.1 acquired from 169.254.169.253
Jan 30 14:01:17.926804 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:17.930077 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:01:17.926826 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:17.934139 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.7/20 acquired from 169.254.169.253
Jan 30 14:01:17.927145 ignition[676]: parsed url from cmdline: ""
Jan 30 14:01:17.941231 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:01:17.927155 ignition[676]: no config URL provided
Jan 30 14:01:17.927171 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:01:17.927191 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:01:17.927201 ignition[676]: failed to fetch config: resource requires networking
Jan 30 14:01:17.927685 ignition[676]: Ignition finished successfully
Jan 30 14:01:17.992435 ignition[758]: Ignition 2.19.0
Jan 30 14:01:17.992451 ignition[758]: Stage: fetch
Jan 30 14:01:17.992924 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:17.992970 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:17.993193 ignition[758]: parsed url from cmdline: ""
Jan 30 14:01:17.993200 ignition[758]: no config URL provided
Jan 30 14:01:17.993217 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:01:17.993239 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:01:17.993296 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 14:01:18.015800 ignition[758]: GET result: OK
Jan 30 14:01:18.016047 ignition[758]: parsing config with SHA512: 61f3de977bb293cb37385cba2ac728776582d27bab58cd770cb99aa4e9e22b9e835a8ba1e6ba15346ff29389b4b364cb058e0c884401dc1b02cba16a936f7315
Jan 30 14:01:18.027206 unknown[758]: fetched base config from "system"
Jan 30 14:01:18.027237 unknown[758]: fetched base config from "system"
Jan 30 14:01:18.028208 ignition[758]: fetch: fetch complete
Jan 30 14:01:18.027246 unknown[758]: fetched user config from "digitalocean"
Jan 30 14:01:18.028218 ignition[758]: fetch: fetch passed
Jan 30 14:01:18.030696 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:01:18.028293 ignition[758]: Ignition finished successfully
Jan 30 14:01:18.039308 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:01:18.079368 ignition[766]: Ignition 2.19.0
Jan 30 14:01:18.079391 ignition[766]: Stage: kargs
Jan 30 14:01:18.080565 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:18.080586 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:18.082150 ignition[766]: kargs: kargs passed
Jan 30 14:01:18.084440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:01:18.082219 ignition[766]: Ignition finished successfully
Jan 30 14:01:18.095407 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:01:18.127614 ignition[773]: Ignition 2.19.0
Jan 30 14:01:18.127628 ignition[773]: Stage: disks
Jan 30 14:01:18.128065 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:18.128117 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:18.131748 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:01:18.129879 ignition[773]: disks: disks passed
Jan 30 14:01:18.130014 ignition[773]: Ignition finished successfully
Jan 30 14:01:18.139678 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:01:18.141223 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:01:18.142408 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:01:18.143652 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:01:18.145475 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:01:18.157474 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:01:18.182138 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:01:18.188473 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:01:18.203258 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:01:18.356065 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 14:01:18.356329 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:01:18.358437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:01:18.370384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:01:18.374741 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:01:18.386268 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 14:01:18.393134 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 14:01:18.402597 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (790)
Jan 30 14:01:18.402644 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:01:18.402665 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:01:18.402685 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:01:18.401696 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:01:18.401756 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:01:18.412121 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:01:18.419369 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:01:18.445171 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:01:18.450244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:01:18.526250 coreos-metadata[793]: Jan 30 14:01:18.526 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 14:01:18.544850 coreos-metadata[792]: Jan 30 14:01:18.544 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 14:01:18.566974 coreos-metadata[793]: Jan 30 14:01:18.564 INFO Fetch successful
Jan 30 14:01:18.569338 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:01:18.573733 coreos-metadata[792]: Jan 30 14:01:18.573 INFO Fetch successful
Jan 30 14:01:18.574735 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 30 14:01:18.574957 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 30 14:01:18.582442 coreos-metadata[793]: Jan 30 14:01:18.581 INFO wrote hostname ci-4081.3.0-b-f874540adc to /sysroot/etc/hostname
Jan 30 14:01:18.583612 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:01:18.586398 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:01:18.597551 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:01:18.605831 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:01:18.831202 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:01:18.840179 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:01:18.847139 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:01:18.859317 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:01:18.862095 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:01:18.911705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:01:18.936384 ignition[910]: INFO : Ignition 2.19.0
Jan 30 14:01:18.936384 ignition[910]: INFO : Stage: mount
Jan 30 14:01:18.936384 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:18.936384 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:18.936384 ignition[910]: INFO : mount: mount passed
Jan 30 14:01:18.936384 ignition[910]: INFO : Ignition finished successfully
Jan 30 14:01:18.939375 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:01:18.954134 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:01:18.992328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:01:19.012290 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922)
Jan 30 14:01:19.022085 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:01:19.022391 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:01:19.022418 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:01:19.028053 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:01:19.042078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:01:19.102643 ignition[939]: INFO : Ignition 2.19.0
Jan 30 14:01:19.102643 ignition[939]: INFO : Stage: files
Jan 30 14:01:19.106090 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:19.106090 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:19.110495 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:01:19.110495 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:01:19.110495 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:01:19.126974 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:01:19.128326 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:01:19.128326 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:01:19.128058 unknown[939]: wrote ssh authorized keys file for user: core
Jan 30 14:01:19.132284 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:01:19.132284 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:01:19.132284 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:01:19.132284 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 14:01:19.185847 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 14:01:19.281538 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:01:19.281538 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:01:19.287296 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:01:19.298019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:01:19.298019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:01:19.298019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:01:19.298019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:01:19.298019 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 14:01:19.464330 systemd-networkd[750]: eth1: Gained IPv6LL
Jan 30 14:01:19.720440 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 30 14:01:19.771914 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 14:01:20.317670 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:01:20.317670 ignition[939]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:01:20.323633 ignition[939]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:01:20.343610 ignition[939]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:01:20.343610 ignition[939]: INFO : files: files passed
Jan 30 14:01:20.343610 ignition[939]: INFO : Ignition finished successfully
Jan 30 14:01:20.327385 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:01:20.341446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:01:20.349286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:01:20.354233 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:01:20.354435 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:01:20.397091 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:01:20.397091 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:01:20.415125 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:01:20.409643 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:01:20.411916 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:01:20.422627 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:01:20.507015 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:01:20.507189 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:01:20.509705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:01:20.511225 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:01:20.513340 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:01:20.520452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:01:20.558591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:01:20.567402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:01:20.599140 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:01:20.600398 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:01:20.602264 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:01:20.604067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:01:20.604275 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:01:20.606680 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:01:20.608459 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:01:20.610838 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:01:20.612062 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:01:20.613801 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:01:20.616009 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:01:20.617301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:01:20.618828 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:01:20.620652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:01:20.622297 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:01:20.623769 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:01:20.624017 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:01:20.625519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:01:20.626702 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:01:20.628045 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:01:20.628186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:01:20.629580 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:01:20.629867 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:01:20.631616 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:01:20.631877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:01:20.633813 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:01:20.634081 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:01:20.635195 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:01:20.635373 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:01:20.646552 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:01:20.682653 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:01:20.683426 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:01:20.683693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:01:20.687503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:01:20.687734 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:01:20.705199 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:01:20.707311 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:01:20.707530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:01:20.744517 ignition[991]: INFO : Ignition 2.19.0
Jan 30 14:01:20.747644 ignition[991]: INFO : Stage: umount
Jan 30 14:01:20.747644 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:20.747644 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:01:20.747644 ignition[991]: INFO : umount: umount passed
Jan 30 14:01:20.747644 ignition[991]: INFO : Ignition finished successfully
Jan 30 14:01:20.744386 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:01:20.755278 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:01:20.756618 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:01:20.756852 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:01:20.759168 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:01:20.759406 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:01:20.767313 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:01:20.767476 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:01:20.768430 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:01:20.768500 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:01:20.769419 systemd[1]: Stopped target network.target - Network.
Jan 30 14:01:20.770189 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:01:20.770322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:01:20.771380 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:01:20.772234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:01:20.779162 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:01:20.781215 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:01:20.786307 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:01:20.787405 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:01:20.787492 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:01:20.788117 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:01:20.788158 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:01:20.788955 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:01:20.789050 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:01:20.789639 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:01:20.789685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:01:20.790254 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:01:20.790301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:01:20.791125 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:01:20.791772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:01:20.797180 systemd-networkd[750]: eth0: DHCPv6 lease lost
Jan 30 14:01:20.810833 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:01:20.811063 systemd-networkd[750]: eth1: DHCPv6 lease lost
Jan 30 14:01:20.812714 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:01:20.817359 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:01:20.817923 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:01:20.829024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:01:20.829192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:01:20.841489 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:01:20.843202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:01:20.844274 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:01:20.845837 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:01:20.845996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:01:20.850577 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:01:20.850726 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:01:20.854273 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:01:20.854390 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:01:20.855466 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:01:20.871810 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:01:20.872234 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:01:20.881773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:01:20.883566 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:01:20.887369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:01:20.887482 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:01:20.890442 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:01:20.890587 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:01:20.894034 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:01:20.894183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:01:20.896524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:01:20.896668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:20.913471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:01:20.914415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:01:20.914570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:01:20.933657 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 14:01:20.933790 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:01:20.935673 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:01:20.935809 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:01:20.938988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:01:20.939099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:20.942373 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:01:20.942641 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:01:20.954716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:01:20.954992 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:01:20.959618 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:01:20.977774 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:01:21.008428 systemd[1]: Switching root.
Jan 30 14:01:21.065354 systemd-journald[183]: Journal stopped
Jan 30 14:01:23.448597 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:01:23.448767 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:01:23.448809 kernel: SELinux: policy capability open_perms=1
Jan 30 14:01:23.448828 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:01:23.448863 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:01:23.448885 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:01:23.448908 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:01:23.448929 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:01:23.448996 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:01:23.449025 kernel: audit: type=1403 audit(1738245681.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:01:23.449050 systemd[1]: Successfully loaded SELinux policy in 69.670ms.
Jan 30 14:01:23.449087 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.587ms.
Jan 30 14:01:23.449113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:01:23.449133 systemd[1]: Detected virtualization kvm.
Jan 30 14:01:23.449157 systemd[1]: Detected architecture x86-64.
Jan 30 14:01:23.449179 systemd[1]: Detected first boot.
Jan 30 14:01:23.449202 systemd[1]: Hostname set to .
Jan 30 14:01:23.449239 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:01:23.449263 zram_generator::config[1051]: No configuration found.
Jan 30 14:01:23.449285 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:01:23.449310 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:01:23.449333 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 14:01:23.449356 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:01:23.449380 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:01:23.449404 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:01:23.449433 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:01:23.449463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:01:23.449487 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:01:23.449511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:01:23.449534 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:01:23.449557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:01:23.449580 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:01:23.449603 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:01:23.449626 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:01:23.449654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:01:23.449681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:01:23.449705 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:01:23.449727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:01:23.449750 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:01:23.449772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:01:23.449797 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:01:23.449822 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:01:23.449846 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:01:23.449869 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:01:23.449893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:01:23.452048 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:01:23.452115 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:01:23.452140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:01:23.452164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:01:23.452186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:01:23.452222 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:01:23.452241 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:01:23.452261 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:01:23.452281 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:01:23.452305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:23.452329 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:01:23.452354 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:01:23.452375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:01:23.452395 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:01:23.452425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:01:23.452450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:01:23.452474 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:01:23.452498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:01:23.452522 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:01:23.452545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:01:23.452571 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:01:23.452602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:01:23.452630 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:01:23.452656 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 14:01:23.452679 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 14:01:23.452715 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:01:23.452734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:01:23.452753 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:01:23.452774 kernel: fuse: init (API version 7.39)
Jan 30 14:01:23.452801 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:01:23.452829 kernel: loop: module loaded
Jan 30 14:01:23.452855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:01:23.452880 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:23.452904 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:01:23.452922 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:01:23.453021 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:01:23.453050 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:01:23.453074 kernel: ACPI: bus type drm_connector registered
Jan 30 14:01:23.453165 systemd-journald[1146]: Collecting audit messages is disabled.
Jan 30 14:01:23.453228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:01:23.453252 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:01:23.453274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:01:23.453297 systemd-journald[1146]: Journal started
Jan 30 14:01:23.453349 systemd-journald[1146]: Runtime Journal (/run/log/journal/2d98980b8f764cb6a41cc13ea802d1ab) is 4.9M, max 39.3M, 34.4M free.
Jan 30 14:01:23.456004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:01:23.462081 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:01:23.463727 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:01:23.464125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:01:23.465549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:01:23.465864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:01:23.467127 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:01:23.467387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:01:23.468924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:01:23.469398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:01:23.471205 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:01:23.471512 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:01:23.473308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:01:23.473765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:01:23.475351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:01:23.477333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:01:23.478688 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:01:23.501817 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:01:23.510225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:01:23.521156 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:01:23.523095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:01:23.538671 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:01:23.562470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:01:23.564519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:01:23.580040 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:01:23.584052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:01:23.594542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:01:23.610174 systemd-journald[1146]: Time spent on flushing to /var/log/journal/2d98980b8f764cb6a41cc13ea802d1ab is 83.520ms for 973 entries.
Jan 30 14:01:23.610174 systemd-journald[1146]: System Journal (/var/log/journal/2d98980b8f764cb6a41cc13ea802d1ab) is 8.0M, max 195.6M, 187.6M free.
Jan 30 14:01:23.729506 systemd-journald[1146]: Received client request to flush runtime journal.
Jan 30 14:01:23.608278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:01:23.623657 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:01:23.624881 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:01:23.629543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:01:23.654443 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:01:23.661527 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:01:23.666467 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:01:23.737683 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:01:23.752173 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 14:01:23.762628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:01:23.781049 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 30 14:01:23.781709 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 30 14:01:23.793807 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:01:23.807468 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:01:23.861217 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:01:23.877629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:01:23.932587 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jan 30 14:01:23.933239 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Jan 30 14:01:23.942875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:01:24.949773 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:01:24.976485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:01:25.028797 systemd-udevd[1222]: Using default interface naming scheme 'v255'.
Jan 30 14:01:25.099188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:01:25.120415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:01:25.187340 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 14:01:25.268782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:25.269057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:01:25.278242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:01:25.290621 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1230)
Jan 30 14:01:25.296247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:01:25.343282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:01:25.345226 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:01:25.345287 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:01:25.345338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:25.345613 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 14:01:25.348187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:01:25.348501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:01:25.369033 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 14:01:25.410862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:01:25.411195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:01:25.416916 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:01:25.417575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:01:25.424108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:01:25.424281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:01:25.557207 systemd-networkd[1226]: lo: Link UP
Jan 30 14:01:25.557227 systemd-networkd[1226]: lo: Gained carrier
Jan 30 14:01:25.561772 systemd-networkd[1226]: Enumeration completed
Jan 30 14:01:25.562393 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:01:25.562496 systemd-networkd[1226]: eth0: Configuring with /run/systemd/network/10-e6:f1:19:bf:d3:bb.network.
Jan 30 14:01:25.563723 systemd-networkd[1226]: eth1: Configuring with /run/systemd/network/10-ce:b5:a4:d8:71:ad.network.
Jan 30 14:01:25.564852 systemd-networkd[1226]: eth0: Link UP
Jan 30 14:01:25.564861 systemd-networkd[1226]: eth0: Gained carrier
Jan 30 14:01:25.570523 systemd-networkd[1226]: eth1: Link UP
Jan 30 14:01:25.570544 systemd-networkd[1226]: eth1: Gained carrier
Jan 30 14:01:25.573355 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 14:01:25.613006 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 14:01:25.639995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 14:01:25.649978 kernel: ACPI: button: Power Button [PWRF]
Jan 30 14:01:25.682990 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 14:01:25.682017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 14:01:25.745301 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 14:01:25.772012 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 14:01:25.774993 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 14:01:25.781329 kernel: Console: switching to colour dummy device 80x25
Jan 30 14:01:25.781463 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 14:01:25.781491 kernel: [drm] features: -context_init
Jan 30 14:01:25.784862 kernel: [drm] number of scanouts: 1
Jan 30 14:01:25.785022 kernel: [drm] number of cap sets: 0
Jan 30 14:01:25.786988 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 14:01:25.791823 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 14:01:25.792014 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:01:25.803499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:25.810182 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 14:01:25.842281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:01:25.842654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:25.861545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:25.903448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:01:25.905783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:25.935757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:26.135254 kernel: EDAC MC: Ver: 3.0.0
Jan 30 14:01:26.167180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:26.179606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:01:26.199146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:01:26.235095 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:01:26.296809 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:01:26.304391 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:01:26.315528 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:01:26.351502 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:01:26.393762 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:01:26.400483 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:01:26.412267 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 30 14:01:26.412585 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:01:26.412798 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:01:26.427667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:01:26.460566 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:01:26.477785 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 30 14:01:26.482626 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 30 14:01:26.490491 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:01:26.496895 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:01:26.505269 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:01:26.524564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:01:26.525655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:01:26.550289 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:01:26.569437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:01:26.577379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:01:26.618220 kernel: loop0: detected capacity change from 0 to 142488
Jan 30 14:01:26.669421 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:01:26.674458 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:01:26.679111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:01:26.712711 kernel: loop1: detected capacity change from 0 to 8
Jan 30 14:01:26.758060 kernel: loop2: detected capacity change from 0 to 210664
Jan 30 14:01:26.761611 systemd-networkd[1226]: eth0: Gained IPv6LL
Jan 30 14:01:26.779596 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:01:26.824413 systemd-networkd[1226]: eth1: Gained IPv6LL
Jan 30 14:01:26.832037 kernel: loop3: detected capacity change from 0 to 140768
Jan 30 14:01:26.947631 kernel: loop4: detected capacity change from 0 to 142488
Jan 30 14:01:26.993211 kernel: loop5: detected capacity change from 0 to 8
Jan 30 14:01:26.998575 kernel: loop6: detected capacity change from 0 to 210664
Jan 30 14:01:27.028077 kernel: loop7: detected capacity change from 0 to 140768
Jan 30 14:01:27.063286 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 14:01:27.067063 (sd-merge)[1317]: Merged extensions into '/usr'.
Jan 30 14:01:27.078254 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:01:27.078290 systemd[1]: Reloading...
Jan 30 14:01:27.296040 zram_generator::config[1347]: No configuration found.
Jan 30 14:01:27.632992 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:01:27.638857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:01:27.785158 systemd[1]: Reloading finished in 706 ms.
Jan 30 14:01:27.812364 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:01:27.820189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:01:27.841482 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:01:27.847317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:01:27.865291 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:01:27.865332 systemd[1]: Reloading...
Jan 30 14:01:27.941795 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:01:27.942486 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:01:27.944086 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:01:27.944571 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Jan 30 14:01:27.944707 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Jan 30 14:01:27.949582 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:01:27.949604 systemd-tmpfiles[1395]: Skipping /boot
Jan 30 14:01:27.971208 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:01:27.971232 systemd-tmpfiles[1395]: Skipping /boot
Jan 30 14:01:28.054221 zram_generator::config[1422]: No configuration found.
Jan 30 14:01:28.330668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:01:28.438850 systemd[1]: Reloading finished in 573 ms.
Jan 30 14:01:28.489652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:01:28.506439 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:01:28.514277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 14:01:28.534738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:01:28.550344 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:01:28.576715 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:01:28.602655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.602917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:01:28.614851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:01:28.638771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:01:28.670975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:01:28.681071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:01:28.681425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.703415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.704841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:01:28.705263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:01:28.705450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.707203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:01:28.726819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.727561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:01:28.736911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:01:28.738767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:01:28.741529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:01:28.744505 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:01:28.748872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:01:28.750137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:01:28.806363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:01:28.806652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:01:28.817851 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:01:28.818419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:01:28.819503 augenrules[1499]: No rules
Jan 30 14:01:28.826630 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:01:28.831271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:01:28.831505 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:01:28.846909 systemd[1]: Finished ensure-sysext.service.
Jan 30 14:01:28.877601 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:01:28.885839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:01:28.887120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:01:28.897453 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 14:01:28.922652 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:01:28.928727 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:01:28.953392 systemd-resolved[1478]: Positive Trust Anchors:
Jan 30 14:01:28.953984 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:01:28.954092 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:01:28.960841 systemd-resolved[1478]: Using system hostname 'ci-4081.3.0-b-f874540adc'.
Jan 30 14:01:28.964651 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:01:28.965998 systemd[1]: Reached target network.target - Network.
Jan 30 14:01:28.969525 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:01:28.970596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:01:29.017327 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:01:29.076280 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 14:01:29.077396 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:01:29.079027 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 14:01:29.082511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 14:01:29.085435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 14:01:29.087632 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 14:01:29.087862 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:01:29.089158 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:01:29.091226 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 14:01:29.092851 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 14:01:29.094529 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:01:29.097920 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 14:01:29.109839 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 14:01:29.117398 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 14:01:29.123918 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 14:01:29.128033 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:01:29.128983 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:01:29.130303 systemd[1]: System is tainted: cgroupsv1
Jan 30 14:01:29.130373 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:01:29.130407 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:01:29.146298 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 14:01:29.151854 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 14:01:29.152002 systemd-timesyncd[1523]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org).
Jan 30 14:01:29.152091 systemd-timesyncd[1523]: Initial clock synchronization to Thu 2025-01-30 14:01:29.156445 UTC.
Jan 30 14:01:29.172397 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 14:01:29.178123 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 14:01:29.204866 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 14:01:29.233530 jq[1534]: false
Jan 30 14:01:29.240191 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 14:01:29.240914 coreos-metadata[1531]: Jan 30 14:01:29.240 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 14:01:29.253249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:29.263507 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 14:01:29.285173 coreos-metadata[1531]: Jan 30 14:01:29.280 INFO Fetch successful
Jan 30 14:01:29.293324 dbus-daemon[1533]: [system] SELinux support is enabled
Jan 30 14:01:29.294905 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 14:01:29.326139 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found loop4
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found loop5
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found loop6
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found loop7
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda1
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda2
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda3
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found usr
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda4
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda6
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda7
Jan 30 14:01:29.333130 extend-filesystems[1537]: Found vda9
Jan 30 14:01:29.333130 extend-filesystems[1537]: Checking size of /dev/vda9
Jan 30 14:01:29.368360 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 14:01:29.482305 extend-filesystems[1537]: Resized partition /dev/vda9
Jan 30 14:01:29.427306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 14:01:29.467338 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 14:01:29.476197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 14:01:29.492431 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024)
Jan 30 14:01:29.497857 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 30 14:01:29.492394 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 14:01:29.504429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 14:01:29.512128 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 14:01:29.554665 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 14:01:29.559324 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 14:01:29.576445 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 14:01:29.579151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 14:01:29.616885 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 14:01:29.625855 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 14:01:29.643009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1573)
Jan 30 14:01:29.648474 jq[1567]: true
Jan 30 14:01:29.698896 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 30 14:01:29.740457 update_engine[1565]: I20250130 14:01:29.736871 1565 main.cc:92] Flatcar Update Engine starting
Jan 30 14:01:29.764051 update_engine[1565]: I20250130 14:01:29.757114 1565 update_check_scheduler.cc:74] Next update check in 11m9s
Jan 30 14:01:29.761852 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 14:01:29.765007 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 14:01:29.765007 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 30 14:01:29.765007 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 30 14:01:29.859994 jq[1586]: true
Jan 30 14:01:29.860421 extend-filesystems[1537]: Resized filesystem in /dev/vda9
Jan 30 14:01:29.860421 extend-filesystems[1537]: Found vdb
Jan 30 14:01:29.776990 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 14:01:29.777453 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 14:01:29.816708 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 14:01:29.941013 tar[1577]: linux-amd64/helm
Jan 30 14:01:29.942719 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 14:01:29.950837 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 14:01:29.963620 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 14:01:29.963699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 14:01:29.964775 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 14:01:29.964919 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 30 14:01:29.965011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 14:01:29.967625 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 14:01:29.967878 systemd-logind[1562]: New seat seat0.
Jan 30 14:01:29.997259 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 14:01:30.027886 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 14:01:30.035800 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 14:01:30.036104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 14:01:30.043582 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 14:01:30.044253 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 14:01:30.402224 bash[1632]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:01:30.408724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 14:01:30.426869 systemd[1]: Starting sshkeys.service...
Jan 30 14:01:30.547791 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 14:01:30.568297 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 14:01:30.720744 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 14:01:30.813420 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 14:01:30.833389 coreos-metadata[1640]: Jan 30 14:01:30.832 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 14:01:30.833751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 14:01:30.874099 coreos-metadata[1640]: Jan 30 14:01:30.870 INFO Fetch successful
Jan 30 14:01:30.877423 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 14:01:30.895775 systemd[1]: Started sshd@0-146.190.128.120:22-147.75.109.163:46910.service - OpenSSH per-connection server daemon (147.75.109.163:46910).
Jan 30 14:01:30.929302 unknown[1640]: wrote ssh authorized keys file for user: core
Jan 30 14:01:31.052379 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:01:31.064497 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 14:01:31.080577 systemd[1]: Finished sshkeys.service.
Jan 30 14:01:31.123808 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 14:01:31.124439 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 14:01:31.156592 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 14:01:31.278238 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 14:01:31.299648 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 14:01:31.331219 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 14:01:31.340687 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 14:01:31.413171 containerd[1584]: time="2025-01-30T14:01:31.412316727Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 14:01:31.508152 containerd[1584]: time="2025-01-30T14:01:31.506223888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.518617690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.518732335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.518765886Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.519117328Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.519152517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.519264066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:01:31.519418 containerd[1584]: time="2025-01-30T14:01:31.519287301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.524029 containerd[1584]: time="2025-01-30T14:01:31.523436567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:01:31.524029 containerd[1584]: time="2025-01-30T14:01:31.523507999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.524029 containerd[1584]: time="2025-01-30T14:01:31.523537528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:01:31.524029 containerd[1584]: time="2025-01-30T14:01:31.523555165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.524029 containerd[1584]: time="2025-01-30T14:01:31.523848501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.527817 containerd[1584]: time="2025-01-30T14:01:31.524804748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:01:31.527817 containerd[1584]: time="2025-01-30T14:01:31.525210269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:01:31.527817 containerd[1584]: time="2025-01-30T14:01:31.525258567Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 14:01:31.527817 containerd[1584]: time="2025-01-30T14:01:31.527151926Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 14:01:31.527817 containerd[1584]: time="2025-01-30T14:01:31.527369548Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 14:01:31.544260 sshd[1658]: Accepted publickey for core from 147.75.109.163 port 46910 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:31.550751 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.579798982Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.580094968Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.580808615Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.580873591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.581070652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 14:01:31.582808 containerd[1584]: time="2025-01-30T14:01:31.581537938Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 14:01:31.585919 containerd[1584]: time="2025-01-30T14:01:31.583985653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 14:01:31.586675 containerd[1584]: time="2025-01-30T14:01:31.586621147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 14:01:31.587334 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 14:01:31.587646 containerd[1584]: time="2025-01-30T14:01:31.587599145Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 14:01:31.587772 containerd[1584]: time="2025-01-30T14:01:31.587748437Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 14:01:31.587900 containerd[1584]: time="2025-01-30T14:01:31.587878399Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.587996659Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588025923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588047395Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588065993Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588088559Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588104303Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588138163Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588172493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588637551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588685366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588706478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588735799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588758760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590003 containerd[1584]: time="2025-01-30T14:01:31.588779809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.588803503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.588832211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.588867769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.588896242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589033217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589062030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589095065Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589176462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589200059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589221278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589311995Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589382249Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:01:31.590709 containerd[1584]: time="2025-01-30T14:01:31.589407254Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 30 14:01:31.591282 containerd[1584]: time="2025-01-30T14:01:31.589428479Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:01:31.591282 containerd[1584]: time="2025-01-30T14:01:31.589447264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.591282 containerd[1584]: time="2025-01-30T14:01:31.589474924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:01:31.591282 containerd[1584]: time="2025-01-30T14:01:31.589503732Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:01:31.591282 containerd[1584]: time="2025-01-30T14:01:31.589524817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:01:31.594123 containerd[1584]: time="2025-01-30T14:01:31.593582276Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:01:31.594123 containerd[1584]: time="2025-01-30T14:01:31.593702347Z" level=info msg="Connect containerd service" Jan 30 14:01:31.594123 containerd[1584]: time="2025-01-30T14:01:31.593801332Z" level=info msg="using legacy CRI server" Jan 30 14:01:31.594123 containerd[1584]: time="2025-01-30T14:01:31.593817660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:01:31.597539 containerd[1584]: 
time="2025-01-30T14:01:31.596010901Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:01:31.610289 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:01:31.619998 containerd[1584]: time="2025-01-30T14:01:31.619179689Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.626771740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.626924762Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627160830Z" level=info msg="Start subscribing containerd event" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627401174Z" level=info msg="Start recovering state" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627556791Z" level=info msg="Start event monitor" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627589047Z" level=info msg="Start snapshots syncer" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627608900Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627626006Z" level=info msg="Start streaming server" Jan 30 14:01:31.628007 containerd[1584]: time="2025-01-30T14:01:31.627828281Z" level=info msg="containerd successfully booted in 0.217186s" Jan 30 14:01:31.642764 systemd-logind[1562]: New session 1 of user core. Jan 30 14:01:31.645508 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 30 14:01:31.708507 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:01:31.727994 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:01:31.787653 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:01:32.101736 systemd[1686]: Queued start job for default target default.target. Jan 30 14:01:32.102486 systemd[1686]: Created slice app.slice - User Application Slice. Jan 30 14:01:32.102534 systemd[1686]: Reached target paths.target - Paths. Jan 30 14:01:32.102556 systemd[1686]: Reached target timers.target - Timers. Jan 30 14:01:32.113315 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:01:32.171523 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:01:32.171706 systemd[1686]: Reached target sockets.target - Sockets. Jan 30 14:01:32.171740 systemd[1686]: Reached target basic.target - Basic System. Jan 30 14:01:32.171856 systemd[1686]: Reached target default.target - Main User Target. Jan 30 14:01:32.171916 systemd[1686]: Startup finished in 364ms. Jan 30 14:01:32.175530 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:01:32.195508 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:01:32.350690 systemd[1]: Started sshd@1-146.190.128.120:22-147.75.109.163:46918.service - OpenSSH per-connection server daemon (147.75.109.163:46918). Jan 30 14:01:32.517158 tar[1577]: linux-amd64/LICENSE Jan 30 14:01:32.517158 tar[1577]: linux-amd64/README.md Jan 30 14:01:32.564574 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 46918 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:32.571238 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:32.585892 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 30 14:01:32.609434 systemd-logind[1562]: New session 2 of user core. Jan 30 14:01:32.615998 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:01:32.712316 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:32.736404 systemd[1]: sshd@1-146.190.128.120:22-147.75.109.163:46918.service: Deactivated successfully. Jan 30 14:01:32.745102 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:01:32.749231 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:01:32.771058 systemd[1]: Started sshd@2-146.190.128.120:22-147.75.109.163:46920.service - OpenSSH per-connection server daemon (147.75.109.163:46920). Jan 30 14:01:32.860365 systemd-logind[1562]: Removed session 2. Jan 30 14:01:32.951669 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 46920 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:32.954763 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:32.966203 systemd-logind[1562]: New session 3 of user core. Jan 30 14:01:32.971028 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:01:33.092488 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:33.100736 systemd[1]: sshd@2-146.190.128.120:22-147.75.109.163:46920.service: Deactivated successfully. Jan 30 14:01:33.105827 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:01:33.110725 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:01:33.114482 systemd-logind[1562]: Removed session 3. Jan 30 14:01:33.608099 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:33.609778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:33.619568 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 30 14:01:33.630128 systemd[1]: Startup finished in 9.445s (kernel) + 12.091s (userspace) = 21.537s. Jan 30 14:01:35.120351 kubelet[1725]: E0130 14:01:35.107012 1725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:35.126780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:35.128309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:43.111719 systemd[1]: Started sshd@3-146.190.128.120:22-147.75.109.163:48958.service - OpenSSH per-connection server daemon (147.75.109.163:48958). Jan 30 14:01:43.188287 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 48958 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:43.194612 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:43.233120 systemd-logind[1562]: New session 4 of user core. Jan 30 14:01:43.238883 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:01:43.337050 sshd[1740]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:43.350042 systemd[1]: Started sshd@4-146.190.128.120:22-147.75.109.163:48968.service - OpenSSH per-connection server daemon (147.75.109.163:48968). Jan 30 14:01:43.351086 systemd[1]: sshd@3-146.190.128.120:22-147.75.109.163:48958.service: Deactivated successfully. Jan 30 14:01:43.358681 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:01:43.361146 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:01:43.367481 systemd-logind[1562]: Removed session 4. 
Jan 30 14:01:43.420689 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 48968 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:43.424481 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:43.437461 systemd-logind[1562]: New session 5 of user core. Jan 30 14:01:43.454142 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:01:43.526768 sshd[1745]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:43.541597 systemd[1]: Started sshd@5-146.190.128.120:22-147.75.109.163:48976.service - OpenSSH per-connection server daemon (147.75.109.163:48976). Jan 30 14:01:43.544042 systemd[1]: sshd@4-146.190.128.120:22-147.75.109.163:48968.service: Deactivated successfully. Jan 30 14:01:43.550302 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:01:43.564315 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:01:43.570081 systemd-logind[1562]: Removed session 5. Jan 30 14:01:43.619270 sshd[1753]: Accepted publickey for core from 147.75.109.163 port 48976 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:43.622983 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:43.634345 systemd-logind[1562]: New session 6 of user core. Jan 30 14:01:43.640376 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:01:43.722279 sshd[1753]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:43.732634 systemd[1]: Started sshd@6-146.190.128.120:22-147.75.109.163:48984.service - OpenSSH per-connection server daemon (147.75.109.163:48984). Jan 30 14:01:43.733475 systemd[1]: sshd@5-146.190.128.120:22-147.75.109.163:48976.service: Deactivated successfully. Jan 30 14:01:43.746070 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:01:43.747102 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 30 14:01:43.752348 systemd-logind[1562]: Removed session 6. Jan 30 14:01:43.805268 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 48984 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:43.809402 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:43.821321 systemd-logind[1562]: New session 7 of user core. Jan 30 14:01:43.827910 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:01:43.920858 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:01:43.922208 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:01:44.912778 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:01:44.930136 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:01:45.380069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:01:45.396622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:45.795636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:01:45.803573 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:45.930808 kubelet[1800]: E0130 14:01:45.930719 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:45.938242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:45.938606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:45.967889 dockerd[1784]: time="2025-01-30T14:01:45.967723265Z" level=info msg="Starting up" Jan 30 14:01:46.327059 dockerd[1784]: time="2025-01-30T14:01:46.324468164Z" level=info msg="Loading containers: start." Jan 30 14:01:46.729996 kernel: Initializing XFRM netlink socket Jan 30 14:01:46.974399 systemd-networkd[1226]: docker0: Link UP Jan 30 14:01:47.026403 dockerd[1784]: time="2025-01-30T14:01:47.024591062Z" level=info msg="Loading containers: done." 
Jan 30 14:01:47.093705 dockerd[1784]: time="2025-01-30T14:01:47.093580552Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:01:47.094167 dockerd[1784]: time="2025-01-30T14:01:47.093853350Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:01:47.094167 dockerd[1784]: time="2025-01-30T14:01:47.094153152Z" level=info msg="Daemon has completed initialization" Jan 30 14:01:47.245989 dockerd[1784]: time="2025-01-30T14:01:47.245182281Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:01:47.246467 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:01:48.961226 containerd[1584]: time="2025-01-30T14:01:48.961154602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:01:50.221755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093864883.mount: Deactivated successfully. 
Jan 30 14:01:53.261060 containerd[1584]: time="2025-01-30T14:01:53.260907996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:53.264671 containerd[1584]: time="2025-01-30T14:01:53.262140849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 14:01:53.265616 containerd[1584]: time="2025-01-30T14:01:53.265503218Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:53.276760 containerd[1584]: time="2025-01-30T14:01:53.276644375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:53.291625 containerd[1584]: time="2025-01-30T14:01:53.286097692Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 4.32485713s" Jan 30 14:01:53.291625 containerd[1584]: time="2025-01-30T14:01:53.286210768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 14:01:53.373355 containerd[1584]: time="2025-01-30T14:01:53.373296469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:01:56.078578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 30 14:01:56.100583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:56.611434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:56.633114 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:56.951922 kubelet[2030]: E0130 14:01:56.950504 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:56.958186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:56.958507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:57.675309 containerd[1584]: time="2025-01-30T14:01:57.674394900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.684774 containerd[1584]: time="2025-01-30T14:01:57.683800078Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 14:01:57.686568 containerd[1584]: time="2025-01-30T14:01:57.686487073Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.696753 containerd[1584]: time="2025-01-30T14:01:57.696556003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.707776 containerd[1584]: time="2025-01-30T14:01:57.706876432Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 4.333257869s" Jan 30 14:01:57.707776 containerd[1584]: time="2025-01-30T14:01:57.707015161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 14:01:57.796182 containerd[1584]: time="2025-01-30T14:01:57.796085407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:01:57.799014 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 14:01:59.891168 containerd[1584]: time="2025-01-30T14:01:59.890933901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:59.895699 containerd[1584]: time="2025-01-30T14:01:59.895596869Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 14:01:59.898556 containerd[1584]: time="2025-01-30T14:01:59.898363268Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:59.906396 containerd[1584]: time="2025-01-30T14:01:59.906257917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:59.912476 containerd[1584]: time="2025-01-30T14:01:59.910839957Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 2.114664361s" Jan 30 14:01:59.912476 containerd[1584]: time="2025-01-30T14:01:59.910930008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 14:02:00.010244 containerd[1584]: time="2025-01-30T14:02:00.009551374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:02:00.872281 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 14:02:02.114399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249786015.mount: Deactivated successfully. Jan 30 14:02:03.312599 containerd[1584]: time="2025-01-30T14:02:03.312481553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.315747 containerd[1584]: time="2025-01-30T14:02:03.315603634Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 14:02:03.319310 containerd[1584]: time="2025-01-30T14:02:03.319090180Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.324193 containerd[1584]: time="2025-01-30T14:02:03.324062037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.327438 containerd[1584]: 
time="2025-01-30T14:02:03.327259081Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 3.317636099s" Jan 30 14:02:03.327438 containerd[1584]: time="2025-01-30T14:02:03.327379562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 14:02:03.376325 containerd[1584]: time="2025-01-30T14:02:03.375516217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:02:04.110086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977013115.mount: Deactivated successfully. Jan 30 14:02:04.120236 systemd-resolved[1478]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jan 30 14:02:06.117001 containerd[1584]: time="2025-01-30T14:02:06.115343230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.118623 containerd[1584]: time="2025-01-30T14:02:06.118525930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 14:02:06.120301 containerd[1584]: time="2025-01-30T14:02:06.119984466Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.128983 containerd[1584]: time="2025-01-30T14:02:06.128846845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.136071 containerd[1584]: time="2025-01-30T14:02:06.133840118Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.758262013s" Jan 30 14:02:06.136071 containerd[1584]: time="2025-01-30T14:02:06.133927693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 14:02:06.214323 containerd[1584]: time="2025-01-30T14:02:06.214239550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:02:06.730363 systemd[1]: Started sshd@7-146.190.128.120:22-218.92.0.203:62202.service - OpenSSH per-connection server daemon (218.92.0.203:62202). 
Jan 30 14:02:06.890542 sshd[2115]: Unable to negotiate with 218.92.0.203 port 62202: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Jan 30 14:02:06.893198 systemd[1]: sshd@7-146.190.128.120:22-218.92.0.203:62202.service: Deactivated successfully. Jan 30 14:02:06.906589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819785871.mount: Deactivated successfully. Jan 30 14:02:06.932248 containerd[1584]: time="2025-01-30T14:02:06.926800614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.932835 containerd[1584]: time="2025-01-30T14:02:06.932702111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 14:02:06.936497 containerd[1584]: time="2025-01-30T14:02:06.934751716Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.957592 containerd[1584]: time="2025-01-30T14:02:06.954625949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:06.957592 containerd[1584]: time="2025-01-30T14:02:06.956290790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 741.981827ms" Jan 30 14:02:06.957592 containerd[1584]: time="2025-01-30T14:02:06.956351749Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 14:02:07.033024 containerd[1584]: time="2025-01-30T14:02:07.031978121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:02:07.078102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:02:07.092589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:07.425364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:07.437937 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:02:07.575997 kubelet[2140]: E0130 14:02:07.575858 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:02:07.582041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:02:07.582610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:02:07.910080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442718209.mount: Deactivated successfully. 
Jan 30 14:02:12.187780 containerd[1584]: time="2025-01-30T14:02:12.187668956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:12.193292 containerd[1584]: time="2025-01-30T14:02:12.193189977Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 14:02:12.197112 containerd[1584]: time="2025-01-30T14:02:12.197021165Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:12.209470 containerd[1584]: time="2025-01-30T14:02:12.209348580Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.177306817s" Jan 30 14:02:12.210104 containerd[1584]: time="2025-01-30T14:02:12.209832265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 14:02:12.214196 containerd[1584]: time="2025-01-30T14:02:12.211458976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:15.124144 update_engine[1565]: I20250130 14:02:15.122596 1565 update_attempter.cc:509] Updating boot flags... 
Jan 30 14:02:15.268988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2264) Jan 30 14:02:15.355988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2267) Jan 30 14:02:17.827601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:02:17.901320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:18.174293 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:02:18.174519 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:02:18.177270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:18.199531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:18.255394 systemd[1]: Reloading requested from client PID 2284 ('systemctl') (unit session-7.scope)... Jan 30 14:02:18.256263 systemd[1]: Reloading... Jan 30 14:02:18.527253 zram_generator::config[2329]: No configuration found. Jan 30 14:02:18.808889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:02:18.961827 systemd[1]: Reloading finished in 704 ms. Jan 30 14:02:19.052342 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:02:19.052772 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:02:19.053672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:19.067530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:19.346349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:02:19.364337 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:02:19.523276 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:19.523276 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:02:19.523276 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:19.528899 kubelet[2384]: I0130 14:02:19.525500 2384 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:02:20.557797 kubelet[2384]: I0130 14:02:20.557023 2384 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:02:20.557797 kubelet[2384]: I0130 14:02:20.557075 2384 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:02:20.557797 kubelet[2384]: I0130 14:02:20.557512 2384 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:02:20.609634 kubelet[2384]: I0130 14:02:20.609583 2384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:02:20.610478 kubelet[2384]: E0130 14:02:20.610428 2384 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://146.190.128.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.637548 kubelet[2384]: I0130 14:02:20.637486 2384 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:02:20.639361 kubelet[2384]: I0130 14:02:20.639174 2384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:02:20.639719 kubelet[2384]: I0130 14:02:20.639312 2384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-b-f874540adc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReserved
Memory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:02:20.640756 kubelet[2384]: I0130 14:02:20.640663 2384 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:02:20.640756 kubelet[2384]: I0130 14:02:20.640708 2384 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:02:20.641027 kubelet[2384]: I0130 14:02:20.640918 2384 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:20.648125 kubelet[2384]: I0130 14:02:20.647557 2384 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:02:20.648125 kubelet[2384]: I0130 14:02:20.647626 2384 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:02:20.648125 kubelet[2384]: I0130 14:02:20.647674 2384 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:02:20.648125 kubelet[2384]: I0130 14:02:20.647711 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:02:20.658522 kubelet[2384]: W0130 14:02:20.657572 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.128.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-f874540adc&limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.658522 kubelet[2384]: E0130 14:02:20.657712 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.128.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-f874540adc&limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.658522 kubelet[2384]: W0130 14:02:20.657888 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.128.120:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.658522 kubelet[2384]: E0130 14:02:20.657935 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.128.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.659994 kubelet[2384]: I0130 14:02:20.659392 2384 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:02:20.662910 kubelet[2384]: I0130 14:02:20.662860 2384 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:02:20.664424 kubelet[2384]: W0130 14:02:20.663207 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:02:20.669557 kubelet[2384]: I0130 14:02:20.669511 2384 server.go:1264] "Started kubelet" Jan 30 14:02:20.674174 kubelet[2384]: I0130 14:02:20.674088 2384 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:02:20.681063 kubelet[2384]: I0130 14:02:20.680910 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:02:20.681984 kubelet[2384]: I0130 14:02:20.681687 2384 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:02:20.682862 kubelet[2384]: I0130 14:02:20.682825 2384 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:02:20.683779 kubelet[2384]: I0130 14:02:20.683558 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:02:20.686543 kubelet[2384]: E0130 14:02:20.686256 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.128.120:6443/api/v1/namespaces/default/events\": dial tcp 146.190.128.120:6443: connect: connection 
refused" event="&Event{ObjectMeta:{ci-4081.3.0-b-f874540adc.181f7d4aa0f01efe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-b-f874540adc,UID:ci-4081.3.0-b-f874540adc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-b-f874540adc,},FirstTimestamp:2025-01-30 14:02:20.669443838 +0000 UTC m=+1.297668472,LastTimestamp:2025-01-30 14:02:20.669443838 +0000 UTC m=+1.297668472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-b-f874540adc,}" Jan 30 14:02:20.698896 kubelet[2384]: I0130 14:02:20.697279 2384 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:02:20.698896 kubelet[2384]: I0130 14:02:20.698163 2384 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:02:20.698896 kubelet[2384]: I0130 14:02:20.698279 2384 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:02:20.698896 kubelet[2384]: E0130 14:02:20.698783 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.128.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-f874540adc?timeout=10s\": dial tcp 146.190.128.120:6443: connect: connection refused" interval="200ms" Jan 30 14:02:20.698896 kubelet[2384]: W0130 14:02:20.698779 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.128.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.698896 kubelet[2384]: E0130 14:02:20.698850 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://146.190.128.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.701564 kubelet[2384]: I0130 14:02:20.701263 2384 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:02:20.702111 kubelet[2384]: I0130 14:02:20.701878 2384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:02:20.704422 kubelet[2384]: E0130 14:02:20.704363 2384 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:02:20.705642 kubelet[2384]: I0130 14:02:20.705595 2384 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:02:20.772843 kubelet[2384]: I0130 14:02:20.772481 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:02:20.785474 kubelet[2384]: I0130 14:02:20.785425 2384 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:02:20.785812 kubelet[2384]: I0130 14:02:20.785794 2384 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:02:20.785972 kubelet[2384]: I0130 14:02:20.785931 2384 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:02:20.786621 kubelet[2384]: E0130 14:02:20.786175 2384 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:02:20.793637 kubelet[2384]: W0130 14:02:20.793341 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.128.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.793637 kubelet[2384]: E0130 14:02:20.793471 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.128.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:20.801981 kubelet[2384]: I0130 14:02:20.801631 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:20.804259 kubelet[2384]: E0130 14:02:20.804185 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.128.120:6443/api/v1/nodes\": dial tcp 146.190.128.120:6443: connect: connection refused" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:20.811389 kubelet[2384]: I0130 14:02:20.808999 2384 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:02:20.811389 kubelet[2384]: I0130 14:02:20.809023 2384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:02:20.813010 kubelet[2384]: I0130 14:02:20.812112 2384 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:20.887327 
kubelet[2384]: I0130 14:02:20.887038 2384 policy_none.go:49] "None policy: Start" Jan 30 14:02:20.887327 kubelet[2384]: E0130 14:02:20.887311 2384 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:02:20.901231 kubelet[2384]: E0130 14:02:20.899626 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.128.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-f874540adc?timeout=10s\": dial tcp 146.190.128.120:6443: connect: connection refused" interval="400ms" Jan 30 14:02:20.902418 kubelet[2384]: I0130 14:02:20.902060 2384 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:02:20.903383 kubelet[2384]: I0130 14:02:20.903241 2384 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:02:21.006757 kubelet[2384]: I0130 14:02:21.005710 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.008234 kubelet[2384]: E0130 14:02:21.008143 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.128.120:6443/api/v1/nodes\": dial tcp 146.190.128.120:6443: connect: connection refused" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.020840 kubelet[2384]: I0130 14:02:21.020773 2384 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:02:21.021416 kubelet[2384]: I0130 14:02:21.021193 2384 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:02:21.021416 kubelet[2384]: I0130 14:02:21.021399 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:02:21.029341 kubelet[2384]: E0130 14:02:21.029169 2384 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-b-f874540adc\" not found" Jan 30 
14:02:21.088713 kubelet[2384]: I0130 14:02:21.087676 2384 topology_manager.go:215] "Topology Admit Handler" podUID="6852380f4440a77acdaac1570916d158" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.090242 kubelet[2384]: I0130 14:02:21.090177 2384 topology_manager.go:215] "Topology Admit Handler" podUID="b81f99f91a3d6b438f99961a73b66b52" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.100830 kubelet[2384]: I0130 14:02:21.092673 2384 topology_manager.go:215] "Topology Admit Handler" podUID="e701dfd166d946bc858cdad50c4a634c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.104401 kubelet[2384]: I0130 14:02:21.104327 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.104740 kubelet[2384]: I0130 14:02:21.104707 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106087 kubelet[2384]: I0130 14:02:21.104830 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106087 kubelet[2384]: I0130 14:02:21.104864 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106087 kubelet[2384]: I0130 14:02:21.104890 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106087 kubelet[2384]: I0130 14:02:21.104931 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106087 kubelet[2384]: I0130 14:02:21.105022 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106411 kubelet[2384]: I0130 14:02:21.105056 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b81f99f91a3d6b438f99961a73b66b52-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-b-f874540adc\" (UID: \"b81f99f91a3d6b438f99961a73b66b52\") " pod="kube-system/kube-scheduler-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.106411 kubelet[2384]: I0130 14:02:21.105136 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.303911 kubelet[2384]: E0130 14:02:21.303802 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.128.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-f874540adc?timeout=10s\": dial tcp 146.190.128.120:6443: connect: connection refused" interval="800ms" Jan 30 14:02:21.407726 kubelet[2384]: E0130 14:02:21.407343 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:21.410014 containerd[1584]: time="2025-01-30T14:02:21.409892650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-b-f874540adc,Uid:6852380f4440a77acdaac1570916d158,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:21.410812 kubelet[2384]: I0130 14:02:21.410419 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.411489 kubelet[2384]: E0130 14:02:21.411026 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.128.120:6443/api/v1/nodes\": dial tcp 146.190.128.120:6443: connect: connection refused" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:21.421245 kubelet[2384]: E0130 14:02:21.420691 2384 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:21.425780 containerd[1584]: time="2025-01-30T14:02:21.425709426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-b-f874540adc,Uid:b81f99f91a3d6b438f99961a73b66b52,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:21.430962 kubelet[2384]: E0130 14:02:21.430720 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:21.432654 containerd[1584]: time="2025-01-30T14:02:21.431829782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-b-f874540adc,Uid:e701dfd166d946bc858cdad50c4a634c,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:22.004393 kubelet[2384]: E0130 14:02:22.003090 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.128.120:6443/api/v1/namespaces/default/events\": dial tcp 146.190.128.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-b-f874540adc.181f7d4aa0f01efe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-b-f874540adc,UID:ci-4081.3.0-b-f874540adc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-b-f874540adc,},FirstTimestamp:2025-01-30 14:02:20.669443838 +0000 UTC m=+1.297668472,LastTimestamp:2025-01-30 14:02:20.669443838 +0000 UTC m=+1.297668472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-b-f874540adc,}" Jan 30 14:02:22.094012 kubelet[2384]: W0130 14:02:22.093845 2384 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.128.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.094534 kubelet[2384]: E0130 14:02:22.094481 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.128.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.105009 kubelet[2384]: E0130 14:02:22.104760 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.128.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-f874540adc?timeout=10s\": dial tcp 146.190.128.120:6443: connect: connection refused" interval="1.6s" Jan 30 14:02:22.174514 kubelet[2384]: W0130 14:02:22.174402 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.128.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.174799 kubelet[2384]: E0130 14:02:22.174591 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.128.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.181281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976240214.mount: Deactivated successfully. 
Jan 30 14:02:22.197904 containerd[1584]: time="2025-01-30T14:02:22.197804978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:22.211361 containerd[1584]: time="2025-01-30T14:02:22.211172343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:02:22.214351 kubelet[2384]: I0130 14:02:22.213685 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:22.214351 kubelet[2384]: E0130 14:02:22.214277 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.128.120:6443/api/v1/nodes\": dial tcp 146.190.128.120:6443: connect: connection refused" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:22.215216 containerd[1584]: time="2025-01-30T14:02:22.215153905Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:22.226989 containerd[1584]: time="2025-01-30T14:02:22.225224299Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:22.227668 containerd[1584]: time="2025-01-30T14:02:22.227615451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:22.231309 containerd[1584]: time="2025-01-30T14:02:22.231229866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:02:22.237362 containerd[1584]: time="2025-01-30T14:02:22.237235442Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:02:22.242585 containerd[1584]: time="2025-01-30T14:02:22.242487648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:22.245518 containerd[1584]: time="2025-01-30T14:02:22.245414698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.352428ms" Jan 30 14:02:22.251643 containerd[1584]: time="2025-01-30T14:02:22.251547831Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.564038ms" Jan 30 14:02:22.251962 containerd[1584]: time="2025-01-30T14:02:22.251887549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 840.76739ms" Jan 30 14:02:22.253964 kubelet[2384]: W0130 14:02:22.253767 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.128.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-f874540adc&limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused 
Jan 30 14:02:22.253964 kubelet[2384]: E0130 14:02:22.253892 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.128.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-f874540adc&limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.266048 kubelet[2384]: W0130 14:02:22.265493 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.128.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.266048 kubelet[2384]: E0130 14:02:22.265640 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.128.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.614315 containerd[1584]: time="2025-01-30T14:02:22.613975876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:22.614315 containerd[1584]: time="2025-01-30T14:02:22.614114859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:22.614315 containerd[1584]: time="2025-01-30T14:02:22.614153473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.615817 containerd[1584]: time="2025-01-30T14:02:22.614592315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.626631 kubelet[2384]: E0130 14:02:22.626267 2384 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.128.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.128.120:6443: connect: connection refused Jan 30 14:02:22.637897 containerd[1584]: time="2025-01-30T14:02:22.637395173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:22.637897 containerd[1584]: time="2025-01-30T14:02:22.637513477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:22.637897 containerd[1584]: time="2025-01-30T14:02:22.637543331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.637897 containerd[1584]: time="2025-01-30T14:02:22.637702155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.665405 containerd[1584]: time="2025-01-30T14:02:22.663547450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:22.665405 containerd[1584]: time="2025-01-30T14:02:22.664194375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:22.665405 containerd[1584]: time="2025-01-30T14:02:22.664319604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.665405 containerd[1584]: time="2025-01-30T14:02:22.664665946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:22.857475 containerd[1584]: time="2025-01-30T14:02:22.857259217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-b-f874540adc,Uid:b81f99f91a3d6b438f99961a73b66b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae2a190c5dc7db8b2ae044f690cd2db89cc73775c3ef02ddc61dc2de73043e15\"" Jan 30 14:02:22.882693 kubelet[2384]: E0130 14:02:22.881428 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:22.888619 containerd[1584]: time="2025-01-30T14:02:22.888387236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-b-f874540adc,Uid:6852380f4440a77acdaac1570916d158,Namespace:kube-system,Attempt:0,} returns sandbox id \"9edb9f4ba2aa6426845025350133be5ab5e7225c0ac542f51739e2fb49f2d163\"" Jan 30 14:02:22.891668 kubelet[2384]: E0130 14:02:22.891619 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:22.894364 containerd[1584]: time="2025-01-30T14:02:22.894269948Z" level=info msg="CreateContainer within sandbox \"ae2a190c5dc7db8b2ae044f690cd2db89cc73775c3ef02ddc61dc2de73043e15\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:02:22.896374 containerd[1584]: time="2025-01-30T14:02:22.896315551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-b-f874540adc,Uid:e701dfd166d946bc858cdad50c4a634c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5cb5a49733eae5bf79b9d384f3d6bfc6ef4be820be0f530f8b8dd09efb3a9fbf\"" Jan 30 14:02:22.897414 kubelet[2384]: E0130 14:02:22.897375 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:22.914763 containerd[1584]: time="2025-01-30T14:02:22.914669217Z" level=info msg="CreateContainer within sandbox \"9edb9f4ba2aa6426845025350133be5ab5e7225c0ac542f51739e2fb49f2d163\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:02:22.926144 containerd[1584]: time="2025-01-30T14:02:22.926070027Z" level=info msg="CreateContainer within sandbox \"5cb5a49733eae5bf79b9d384f3d6bfc6ef4be820be0f530f8b8dd09efb3a9fbf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:02:22.968920 containerd[1584]: time="2025-01-30T14:02:22.968562983Z" level=info msg="CreateContainer within sandbox \"9edb9f4ba2aa6426845025350133be5ab5e7225c0ac542f51739e2fb49f2d163\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a853a4a76384395aef99722ca5acf6acce0d1a18301319efffc40031eca4d781\"" Jan 30 14:02:22.971357 containerd[1584]: time="2025-01-30T14:02:22.970689240Z" level=info msg="StartContainer for \"a853a4a76384395aef99722ca5acf6acce0d1a18301319efffc40031eca4d781\"" Jan 30 14:02:22.987653 containerd[1584]: time="2025-01-30T14:02:22.987387500Z" level=info msg="CreateContainer within sandbox \"ae2a190c5dc7db8b2ae044f690cd2db89cc73775c3ef02ddc61dc2de73043e15\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b96bbc7329246ebdff83a3fec285efae6c59da9bc3833ce9b6f477fc7571d338\"" Jan 30 14:02:22.989088 containerd[1584]: time="2025-01-30T14:02:22.989037092Z" level=info msg="StartContainer for \"b96bbc7329246ebdff83a3fec285efae6c59da9bc3833ce9b6f477fc7571d338\"" Jan 30 14:02:23.004811 containerd[1584]: time="2025-01-30T14:02:23.004718433Z" level=info 
msg="CreateContainer within sandbox \"5cb5a49733eae5bf79b9d384f3d6bfc6ef4be820be0f530f8b8dd09efb3a9fbf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99a0ee38757ef24c929cae8e3c1ca30edfbb824f4b435a46b3e03134a0bb27a2\"" Jan 30 14:02:23.009996 containerd[1584]: time="2025-01-30T14:02:23.008083075Z" level=info msg="StartContainer for \"99a0ee38757ef24c929cae8e3c1ca30edfbb824f4b435a46b3e03134a0bb27a2\"" Jan 30 14:02:23.322496 containerd[1584]: time="2025-01-30T14:02:23.322238862Z" level=info msg="StartContainer for \"a853a4a76384395aef99722ca5acf6acce0d1a18301319efffc40031eca4d781\" returns successfully" Jan 30 14:02:23.326964 containerd[1584]: time="2025-01-30T14:02:23.326695277Z" level=info msg="StartContainer for \"99a0ee38757ef24c929cae8e3c1ca30edfbb824f4b435a46b3e03134a0bb27a2\" returns successfully" Jan 30 14:02:23.342105 containerd[1584]: time="2025-01-30T14:02:23.340456066Z" level=info msg="StartContainer for \"b96bbc7329246ebdff83a3fec285efae6c59da9bc3833ce9b6f477fc7571d338\" returns successfully" Jan 30 14:02:23.706146 kubelet[2384]: E0130 14:02:23.706060 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.128.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-f874540adc?timeout=10s\": dial tcp 146.190.128.120:6443: connect: connection refused" interval="3.2s" Jan 30 14:02:23.816699 kubelet[2384]: I0130 14:02:23.816630 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:23.830368 kubelet[2384]: E0130 14:02:23.830120 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:23.841030 kubelet[2384]: E0130 14:02:23.840713 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:23.863846 kubelet[2384]: E0130 14:02:23.863765 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:24.872499 kubelet[2384]: E0130 14:02:24.872238 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:24.872499 kubelet[2384]: E0130 14:02:24.872316 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:26.940986 kubelet[2384]: I0130 14:02:26.939331 2384 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:27.663423 kubelet[2384]: I0130 14:02:27.662257 2384 apiserver.go:52] "Watching apiserver" Jan 30 14:02:27.699492 kubelet[2384]: I0130 14:02:27.699360 2384 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:02:29.861331 systemd[1]: Reloading requested from client PID 2657 ('systemctl') (unit session-7.scope)... Jan 30 14:02:29.862350 systemd[1]: Reloading... Jan 30 14:02:30.056194 zram_generator::config[2699]: No configuration found. Jan 30 14:02:30.430070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:02:30.580354 systemd[1]: Reloading finished in 717 ms. 
Jan 30 14:02:30.640338 kubelet[2384]: I0130 14:02:30.640253 2384 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:02:30.642465 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:30.659028 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:02:30.659718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:30.673138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:31.018207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:31.052161 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:02:31.253126 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:31.256981 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:02:31.256981 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 14:02:31.256981 kubelet[2757]: I0130 14:02:31.254954 2757 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:02:31.270289 kubelet[2757]: I0130 14:02:31.269153 2757 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:02:31.270289 kubelet[2757]: I0130 14:02:31.269207 2757 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:02:31.272665 kubelet[2757]: I0130 14:02:31.271200 2757 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:02:31.275088 kubelet[2757]: I0130 14:02:31.274482 2757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:02:31.280927 kubelet[2757]: I0130 14:02:31.280851 2757 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:02:31.304263 kubelet[2757]: I0130 14:02:31.302609 2757 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:02:31.304263 kubelet[2757]: I0130 14:02:31.303481 2757 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:02:31.304263 kubelet[2757]: I0130 14:02:31.303552 2757 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-b-f874540adc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:02:31.304263 kubelet[2757]: I0130 14:02:31.303975 2757 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 30 14:02:31.304663 kubelet[2757]: I0130 14:02:31.304000 2757 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:02:31.304663 kubelet[2757]: I0130 14:02:31.304089 2757 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:31.304663 kubelet[2757]: I0130 14:02:31.304503 2757 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.305904 2757 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.306064 2757 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.308923 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.315664 2757 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.316067 2757 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:02:31.319038 kubelet[2757]: I0130 14:02:31.316808 2757 server.go:1264] "Started kubelet" Jan 30 14:02:31.324365 kubelet[2757]: I0130 14:02:31.324246 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:02:31.334974 kubelet[2757]: I0130 14:02:31.334850 2757 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:02:31.339333 kubelet[2757]: I0130 14:02:31.339239 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:02:31.348674 kubelet[2757]: I0130 14:02:31.348582 2757 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:02:31.374453 kubelet[2757]: I0130 14:02:31.374332 2757 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:02:31.380823 kubelet[2757]: I0130 14:02:31.380602 2757 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:02:31.381996 kubelet[2757]: I0130 14:02:31.381328 2757 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:02:31.383142 kubelet[2757]: I0130 14:02:31.383114 2757 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:02:31.399616 kubelet[2757]: I0130 14:02:31.398921 2757 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:02:31.401824 kubelet[2757]: I0130 14:02:31.400084 2757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:02:31.414927 kubelet[2757]: I0130 14:02:31.414703 2757 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:02:31.426807 kubelet[2757]: I0130 14:02:31.426517 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:02:31.431757 kubelet[2757]: I0130 14:02:31.431144 2757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:02:31.431757 kubelet[2757]: I0130 14:02:31.431213 2757 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:02:31.431757 kubelet[2757]: I0130 14:02:31.431264 2757 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:02:31.431757 kubelet[2757]: E0130 14:02:31.431351 2757 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:02:31.503084 kubelet[2757]: I0130 14:02:31.490369 2757 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.538104 kubelet[2757]: E0130 14:02:31.531699 2757 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:02:31.545321 kubelet[2757]: I0130 14:02:31.545260 2757 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.547096 kubelet[2757]: I0130 14:02:31.546858 2757 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.687679 2757 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.687708 2757 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.687751 2757 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.688072 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.688090 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:02:31.689227 kubelet[2757]: I0130 14:02:31.688122 2757 policy_none.go:49] "None policy: Start" Jan 30 14:02:31.694077 kubelet[2757]: I0130 14:02:31.693350 2757 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 
14:02:31.694077 kubelet[2757]: I0130 14:02:31.693434 2757 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:02:31.694368 kubelet[2757]: I0130 14:02:31.694337 2757 state_mem.go:75] "Updated machine memory state" Jan 30 14:02:31.699277 kubelet[2757]: I0130 14:02:31.697479 2757 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:02:31.703509 kubelet[2757]: I0130 14:02:31.702207 2757 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:02:31.703509 kubelet[2757]: I0130 14:02:31.702740 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:02:31.732346 kubelet[2757]: I0130 14:02:31.732138 2757 topology_manager.go:215] "Topology Admit Handler" podUID="e701dfd166d946bc858cdad50c4a634c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.732572 kubelet[2757]: I0130 14:02:31.732388 2757 topology_manager.go:215] "Topology Admit Handler" podUID="6852380f4440a77acdaac1570916d158" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.732572 kubelet[2757]: I0130 14:02:31.732449 2757 topology_manager.go:215] "Topology Admit Handler" podUID="b81f99f91a3d6b438f99961a73b66b52" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.778993 kubelet[2757]: W0130 14:02:31.773779 2757 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:02:31.784042 kubelet[2757]: W0130 14:02:31.781326 2757 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:02:31.786081 kubelet[2757]: W0130 14:02:31.785814 2757 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:02:31.791384 kubelet[2757]: I0130 14:02:31.791232 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.792919 kubelet[2757]: I0130 14:02:31.791588 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.792919 kubelet[2757]: I0130 14:02:31.791638 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.792919 kubelet[2757]: I0130 14:02:31.791669 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b81f99f91a3d6b438f99961a73b66b52-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-b-f874540adc\" (UID: \"b81f99f91a3d6b438f99961a73b66b52\") " pod="kube-system/kube-scheduler-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.792919 kubelet[2757]: I0130 14:02:31.792015 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.792919 kubelet[2757]: I0130 14:02:31.792091 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e701dfd166d946bc858cdad50c4a634c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-b-f874540adc\" (UID: \"e701dfd166d946bc858cdad50c4a634c\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.793253 kubelet[2757]: I0130 14:02:31.792127 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.793253 kubelet[2757]: I0130 14:02:31.792162 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 14:02:31.793253 kubelet[2757]: I0130 14:02:31.792205 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6852380f4440a77acdaac1570916d158-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-b-f874540adc\" (UID: \"6852380f4440a77acdaac1570916d158\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" Jan 30 
14:02:32.079540 kubelet[2757]: E0130 14:02:32.079445 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.090455 kubelet[2757]: E0130 14:02:32.082604 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.090919 kubelet[2757]: E0130 14:02:32.090053 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.311192 kubelet[2757]: I0130 14:02:32.311119 2757 apiserver.go:52] "Watching apiserver" Jan 30 14:02:32.390730 kubelet[2757]: I0130 14:02:32.390264 2757 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:02:32.580181 kubelet[2757]: E0130 14:02:32.577846 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.580181 kubelet[2757]: E0130 14:02:32.578125 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.580181 kubelet[2757]: E0130 14:02:32.579279 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:32.682050 kubelet[2757]: I0130 14:02:32.681715 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-b-f874540adc" podStartSLOduration=1.6816469939999998 
podStartE2EDuration="1.681646994s" podCreationTimestamp="2025-01-30 14:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:32.643089679 +0000 UTC m=+1.572045028" watchObservedRunningTime="2025-01-30 14:02:32.681646994 +0000 UTC m=+1.610602355" Jan 30 14:02:32.708223 kubelet[2757]: I0130 14:02:32.708066 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-b-f874540adc" podStartSLOduration=1.7080396599999998 podStartE2EDuration="1.70803966s" podCreationTimestamp="2025-01-30 14:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:32.681364043 +0000 UTC m=+1.610319389" watchObservedRunningTime="2025-01-30 14:02:32.70803966 +0000 UTC m=+1.636995011" Jan 30 14:02:32.740269 kubelet[2757]: I0130 14:02:32.739231 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-b-f874540adc" podStartSLOduration=1.739195998 podStartE2EDuration="1.739195998s" podCreationTimestamp="2025-01-30 14:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:32.711387855 +0000 UTC m=+1.640343215" watchObservedRunningTime="2025-01-30 14:02:32.739195998 +0000 UTC m=+1.668151360" Jan 30 14:02:33.320490 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:33.331223 sshd[1761]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:33.338352 systemd[1]: sshd@6-146.190.128.120:22-147.75.109.163:48984.service: Deactivated successfully. Jan 30 14:02:33.347322 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:02:33.348078 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. 
Jan 30 14:02:33.353037 systemd-logind[1562]: Removed session 7. Jan 30 14:02:33.582170 kubelet[2757]: E0130 14:02:33.582007 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:33.585984 kubelet[2757]: E0130 14:02:33.585617 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:34.584136 kubelet[2757]: E0130 14:02:34.583810 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:34.584800 kubelet[2757]: E0130 14:02:34.584205 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:38.626755 kubelet[2757]: E0130 14:02:38.626442 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:39.603321 kubelet[2757]: E0130 14:02:39.603022 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:42.900000 kubelet[2757]: E0130 14:02:42.899899 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:43.624013 kubelet[2757]: E0130 14:02:43.622202 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:44.378593 kubelet[2757]: I0130 14:02:44.378538 2757 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:02:44.386038 containerd[1584]: time="2025-01-30T14:02:44.384353368Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:02:44.396524 kubelet[2757]: I0130 14:02:44.394227 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:02:44.567480 kubelet[2757]: I0130 14:02:44.563226 2757 topology_manager.go:215] "Topology Admit Handler" podUID="232f0e23-c33c-47ca-a52a-6facb4316346" podNamespace="kube-system" podName="kube-proxy-9rn2t" Jan 30 14:02:44.645611 kubelet[2757]: I0130 14:02:44.645347 2757 topology_manager.go:215] "Topology Admit Handler" podUID="059e1b84-8401-4e90-a3c1-cdb09110d402" podNamespace="kube-flannel" podName="kube-flannel-ds-fxrfp" Jan 30 14:02:44.664103 kubelet[2757]: W0130 14:02:44.664017 2757 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4081.3.0-b-f874540adc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4081.3.0-b-f874540adc' and this object Jan 30 14:02:44.664103 kubelet[2757]: E0130 14:02:44.664097 2757 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4081.3.0-b-f874540adc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4081.3.0-b-f874540adc' and this object Jan 30 14:02:44.664888 kubelet[2757]: W0130 14:02:44.664177 2757 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: 
configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-b-f874540adc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4081.3.0-b-f874540adc' and this object Jan 30 14:02:44.664888 kubelet[2757]: E0130 14:02:44.664192 2757 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-b-f874540adc" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4081.3.0-b-f874540adc' and this object Jan 30 14:02:44.716423 kubelet[2757]: I0130 14:02:44.714778 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/232f0e23-c33c-47ca-a52a-6facb4316346-xtables-lock\") pod \"kube-proxy-9rn2t\" (UID: \"232f0e23-c33c-47ca-a52a-6facb4316346\") " pod="kube-system/kube-proxy-9rn2t" Jan 30 14:02:44.716423 kubelet[2757]: I0130 14:02:44.714876 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcpw7\" (UniqueName: \"kubernetes.io/projected/232f0e23-c33c-47ca-a52a-6facb4316346-kube-api-access-gcpw7\") pod \"kube-proxy-9rn2t\" (UID: \"232f0e23-c33c-47ca-a52a-6facb4316346\") " pod="kube-system/kube-proxy-9rn2t" Jan 30 14:02:44.716423 kubelet[2757]: I0130 14:02:44.714920 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/232f0e23-c33c-47ca-a52a-6facb4316346-kube-proxy\") pod \"kube-proxy-9rn2t\" (UID: \"232f0e23-c33c-47ca-a52a-6facb4316346\") " pod="kube-system/kube-proxy-9rn2t" Jan 30 14:02:44.716423 kubelet[2757]: I0130 14:02:44.714988 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/232f0e23-c33c-47ca-a52a-6facb4316346-lib-modules\") pod \"kube-proxy-9rn2t\" (UID: \"232f0e23-c33c-47ca-a52a-6facb4316346\") " pod="kube-system/kube-proxy-9rn2t" Jan 30 14:02:44.816052 kubelet[2757]: I0130 14:02:44.815905 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrkm\" (UniqueName: \"kubernetes.io/projected/059e1b84-8401-4e90-a3c1-cdb09110d402-kube-api-access-jnrkm\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.816655 kubelet[2757]: I0130 14:02:44.816099 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/059e1b84-8401-4e90-a3c1-cdb09110d402-cni-plugin\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.816655 kubelet[2757]: I0130 14:02:44.816144 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/059e1b84-8401-4e90-a3c1-cdb09110d402-cni\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.816655 kubelet[2757]: I0130 14:02:44.816187 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059e1b84-8401-4e90-a3c1-cdb09110d402-run\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.816655 kubelet[2757]: I0130 14:02:44.816206 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/059e1b84-8401-4e90-a3c1-cdb09110d402-flannel-cfg\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.816655 kubelet[2757]: I0130 14:02:44.816396 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/059e1b84-8401-4e90-a3c1-cdb09110d402-xtables-lock\") pod \"kube-flannel-ds-fxrfp\" (UID: \"059e1b84-8401-4e90-a3c1-cdb09110d402\") " pod="kube-flannel/kube-flannel-ds-fxrfp" Jan 30 14:02:44.882995 kubelet[2757]: E0130 14:02:44.879689 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:44.884320 containerd[1584]: time="2025-01-30T14:02:44.884124875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rn2t,Uid:232f0e23-c33c-47ca-a52a-6facb4316346,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:44.979822 containerd[1584]: time="2025-01-30T14:02:44.977533602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:44.979822 containerd[1584]: time="2025-01-30T14:02:44.979233031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:44.979822 containerd[1584]: time="2025-01-30T14:02:44.979254610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:44.979822 containerd[1584]: time="2025-01-30T14:02:44.979475447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.137074 containerd[1584]: time="2025-01-30T14:02:45.136277946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rn2t,Uid:232f0e23-c33c-47ca-a52a-6facb4316346,Namespace:kube-system,Attempt:0,} returns sandbox id \"94a40f111a4dce2ec2d0f5ecd52af75c3f3653226fb4989823b1c691b38f9a8f\"" Jan 30 14:02:45.139786 kubelet[2757]: E0130 14:02:45.139128 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:45.162549 containerd[1584]: time="2025-01-30T14:02:45.162403476Z" level=info msg="CreateContainer within sandbox \"94a40f111a4dce2ec2d0f5ecd52af75c3f3653226fb4989823b1c691b38f9a8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:02:45.212417 containerd[1584]: time="2025-01-30T14:02:45.212055751Z" level=info msg="CreateContainer within sandbox \"94a40f111a4dce2ec2d0f5ecd52af75c3f3653226fb4989823b1c691b38f9a8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53b008d5a07fd6522266e14cdfb3586aae432cd631d646978601c50ef9442d04\"" Jan 30 14:02:45.219725 containerd[1584]: time="2025-01-30T14:02:45.214909384Z" level=info msg="StartContainer for \"53b008d5a07fd6522266e14cdfb3586aae432cd631d646978601c50ef9442d04\"" Jan 30 14:02:45.474781 containerd[1584]: time="2025-01-30T14:02:45.474562823Z" level=info msg="StartContainer for \"53b008d5a07fd6522266e14cdfb3586aae432cd631d646978601c50ef9442d04\" returns successfully" Jan 30 14:02:45.651545 kubelet[2757]: E0130 14:02:45.650474 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:45.704993 kubelet[2757]: I0130 14:02:45.704855 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-9rn2t" podStartSLOduration=1.704822993 podStartE2EDuration="1.704822993s" podCreationTimestamp="2025-01-30 14:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:45.70408403 +0000 UTC m=+14.633039399" watchObservedRunningTime="2025-01-30 14:02:45.704822993 +0000 UTC m=+14.633778348" Jan 30 14:02:45.882237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682413168.mount: Deactivated successfully. Jan 30 14:02:45.920891 kubelet[2757]: E0130 14:02:45.920681 2757 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:02:45.921367 kubelet[2757]: E0130 14:02:45.921246 2757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/059e1b84-8401-4e90-a3c1-cdb09110d402-flannel-cfg podName:059e1b84-8401-4e90-a3c1-cdb09110d402 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:46.420793512 +0000 UTC m=+15.349748862 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/059e1b84-8401-4e90-a3c1-cdb09110d402-flannel-cfg") pod "kube-flannel-ds-fxrfp" (UID: "059e1b84-8401-4e90-a3c1-cdb09110d402") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:02:46.456620 kubelet[2757]: E0130 14:02:46.455981 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:46.459929 containerd[1584]: time="2025-01-30T14:02:46.459475705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fxrfp,Uid:059e1b84-8401-4e90-a3c1-cdb09110d402,Namespace:kube-flannel,Attempt:0,}" Jan 30 14:02:46.525345 containerd[1584]: time="2025-01-30T14:02:46.524001531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:46.525345 containerd[1584]: time="2025-01-30T14:02:46.525255947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:46.525345 containerd[1584]: time="2025-01-30T14:02:46.525306850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:46.526743 containerd[1584]: time="2025-01-30T14:02:46.525739904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:46.697538 containerd[1584]: time="2025-01-30T14:02:46.697432892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fxrfp,Uid:059e1b84-8401-4e90-a3c1-cdb09110d402,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\"" Jan 30 14:02:46.701557 kubelet[2757]: E0130 14:02:46.701515 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:46.705773 containerd[1584]: time="2025-01-30T14:02:46.704771659Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 14:02:49.066589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775655389.mount: Deactivated successfully. Jan 30 14:02:49.222362 containerd[1584]: time="2025-01-30T14:02:49.222276942Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:49.228970 containerd[1584]: time="2025-01-30T14:02:49.228832123Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 30 14:02:49.230210 containerd[1584]: time="2025-01-30T14:02:49.229758792Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:49.235575 containerd[1584]: time="2025-01-30T14:02:49.235476937Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:49.238047 containerd[1584]: time="2025-01-30T14:02:49.237930267Z" level=info msg="Pulled image 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.5330968s" Jan 30 14:02:49.238047 containerd[1584]: time="2025-01-30T14:02:49.238032827Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 14:02:49.246530 containerd[1584]: time="2025-01-30T14:02:49.246468720Z" level=info msg="CreateContainer within sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 14:02:49.332885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606258586.mount: Deactivated successfully. Jan 30 14:02:49.341749 containerd[1584]: time="2025-01-30T14:02:49.338371502Z" level=info msg="CreateContainer within sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680\"" Jan 30 14:02:49.343596 containerd[1584]: time="2025-01-30T14:02:49.343514449Z" level=info msg="StartContainer for \"bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680\"" Jan 30 14:02:49.510268 containerd[1584]: time="2025-01-30T14:02:49.509529184Z" level=info msg="StartContainer for \"bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680\" returns successfully" Jan 30 14:02:49.607116 containerd[1584]: time="2025-01-30T14:02:49.606696003Z" level=info msg="shim disconnected" id=bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680 namespace=k8s.io Jan 30 14:02:49.607116 containerd[1584]: 
time="2025-01-30T14:02:49.606787946Z" level=warning msg="cleaning up after shim disconnected" id=bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680 namespace=k8s.io Jan 30 14:02:49.607116 containerd[1584]: time="2025-01-30T14:02:49.606803887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:49.690295 kubelet[2757]: E0130 14:02:49.689977 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:49.700479 containerd[1584]: time="2025-01-30T14:02:49.696899035Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 14:02:49.827828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf4f6004c2dc591c25f13ce02029aec74d76168d735bdc164c6877a245b74680-rootfs.mount: Deactivated successfully. Jan 30 14:02:52.294520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954046921.mount: Deactivated successfully. 
Jan 30 14:02:54.113209 containerd[1584]: time="2025-01-30T14:02:54.113078679Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:54.118997 containerd[1584]: time="2025-01-30T14:02:54.117645669Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:54.118997 containerd[1584]: time="2025-01-30T14:02:54.118190872Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 30 14:02:54.125173 containerd[1584]: time="2025-01-30T14:02:54.125067582Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:54.129193 containerd[1584]: time="2025-01-30T14:02:54.128296722Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.427809588s" Jan 30 14:02:54.129193 containerd[1584]: time="2025-01-30T14:02:54.128394475Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 14:02:54.134916 containerd[1584]: time="2025-01-30T14:02:54.134840642Z" level=info msg="CreateContainer within sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:02:54.184615 containerd[1584]: time="2025-01-30T14:02:54.180591816Z" level=info msg="CreateContainer within 
sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c\"" Jan 30 14:02:54.195856 containerd[1584]: time="2025-01-30T14:02:54.194227017Z" level=info msg="StartContainer for \"b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c\"" Jan 30 14:02:54.363483 containerd[1584]: time="2025-01-30T14:02:54.363326212Z" level=info msg="StartContainer for \"b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c\" returns successfully" Jan 30 14:02:54.403253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c-rootfs.mount: Deactivated successfully. Jan 30 14:02:54.406059 containerd[1584]: time="2025-01-30T14:02:54.404655532Z" level=info msg="shim disconnected" id=b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c namespace=k8s.io Jan 30 14:02:54.406059 containerd[1584]: time="2025-01-30T14:02:54.404753293Z" level=warning msg="cleaning up after shim disconnected" id=b932e5e733f9e3471957f4d3f019b904f4877392339bfef43adc14802bf6930c namespace=k8s.io Jan 30 14:02:54.406059 containerd[1584]: time="2025-01-30T14:02:54.404768767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:54.420013 kubelet[2757]: I0130 14:02:54.419442 2757 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:02:54.447070 containerd[1584]: time="2025-01-30T14:02:54.446836127Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:02:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:02:54.509032 kubelet[2757]: I0130 14:02:54.508257 2757 topology_manager.go:215] "Topology Admit Handler" podUID="393557f5-5039-40ec-9f28-041efc79b269" 
podNamespace="kube-system" podName="coredns-7db6d8ff4d-trdcz" Jan 30 14:02:54.509032 kubelet[2757]: I0130 14:02:54.508755 2757 topology_manager.go:215] "Topology Admit Handler" podUID="145dd1c8-f19a-40fb-ba55-1f0ed2382665" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9ccbc" Jan 30 14:02:54.686340 kubelet[2757]: I0130 14:02:54.686091 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8wv\" (UniqueName: \"kubernetes.io/projected/145dd1c8-f19a-40fb-ba55-1f0ed2382665-kube-api-access-cv8wv\") pod \"coredns-7db6d8ff4d-9ccbc\" (UID: \"145dd1c8-f19a-40fb-ba55-1f0ed2382665\") " pod="kube-system/coredns-7db6d8ff4d-9ccbc" Jan 30 14:02:54.686778 kubelet[2757]: I0130 14:02:54.686589 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393557f5-5039-40ec-9f28-041efc79b269-config-volume\") pod \"coredns-7db6d8ff4d-trdcz\" (UID: \"393557f5-5039-40ec-9f28-041efc79b269\") " pod="kube-system/coredns-7db6d8ff4d-trdcz" Jan 30 14:02:54.686778 kubelet[2757]: I0130 14:02:54.686684 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx4w4\" (UniqueName: \"kubernetes.io/projected/393557f5-5039-40ec-9f28-041efc79b269-kube-api-access-xx4w4\") pod \"coredns-7db6d8ff4d-trdcz\" (UID: \"393557f5-5039-40ec-9f28-041efc79b269\") " pod="kube-system/coredns-7db6d8ff4d-trdcz" Jan 30 14:02:54.686778 kubelet[2757]: I0130 14:02:54.686706 2757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145dd1c8-f19a-40fb-ba55-1f0ed2382665-config-volume\") pod \"coredns-7db6d8ff4d-9ccbc\" (UID: \"145dd1c8-f19a-40fb-ba55-1f0ed2382665\") " pod="kube-system/coredns-7db6d8ff4d-9ccbc" Jan 30 14:02:54.718243 kubelet[2757]: E0130 14:02:54.718177 2757 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:54.729039 containerd[1584]: time="2025-01-30T14:02:54.727977231Z" level=info msg="CreateContainer within sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 14:02:54.763582 containerd[1584]: time="2025-01-30T14:02:54.763421276Z" level=info msg="CreateContainer within sandbox \"eb7ad2f804fdaf92a2ec7598b531fbce63d4e0b1f6593192d4eb2daa04863df8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9e09580e5e8f446ce34d82b3b3c00aac4182d6b1043bd29bff4e8f4bd3b40786\"" Jan 30 14:02:54.767034 containerd[1584]: time="2025-01-30T14:02:54.766442981Z" level=info msg="StartContainer for \"9e09580e5e8f446ce34d82b3b3c00aac4182d6b1043bd29bff4e8f4bd3b40786\"" Jan 30 14:02:54.833907 kubelet[2757]: E0130 14:02:54.833538 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:54.843690 containerd[1584]: time="2025-01-30T14:02:54.843408529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ccbc,Uid:145dd1c8-f19a-40fb-ba55-1f0ed2382665,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:54.955886 containerd[1584]: time="2025-01-30T14:02:54.955654407Z" level=info msg="StartContainer for \"9e09580e5e8f446ce34d82b3b3c00aac4182d6b1043bd29bff4e8f4bd3b40786\" returns successfully" Jan 30 14:02:54.996075 containerd[1584]: time="2025-01-30T14:02:54.995706197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ccbc,Uid:145dd1c8-f19a-40fb-ba55-1f0ed2382665,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0a618299de90097ad0e35e60bda320aec341e2bfd12c99dda8fe2f5c728462e0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 14:02:54.997029 kubelet[2757]: E0130 14:02:54.996297 2757 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a618299de90097ad0e35e60bda320aec341e2bfd12c99dda8fe2f5c728462e0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 14:02:54.997029 kubelet[2757]: E0130 14:02:54.996563 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a618299de90097ad0e35e60bda320aec341e2bfd12c99dda8fe2f5c728462e0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-9ccbc" Jan 30 14:02:54.997029 kubelet[2757]: E0130 14:02:54.996612 2757 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a618299de90097ad0e35e60bda320aec341e2bfd12c99dda8fe2f5c728462e0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-9ccbc" Jan 30 14:02:54.997029 kubelet[2757]: E0130 14:02:54.996708 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9ccbc_kube-system(145dd1c8-f19a-40fb-ba55-1f0ed2382665)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9ccbc_kube-system(145dd1c8-f19a-40fb-ba55-1f0ed2382665)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a618299de90097ad0e35e60bda320aec341e2bfd12c99dda8fe2f5c728462e0\\\": 
plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-9ccbc" podUID="145dd1c8-f19a-40fb-ba55-1f0ed2382665" Jan 30 14:02:55.135333 kubelet[2757]: E0130 14:02:55.135266 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:55.136413 containerd[1584]: time="2025-01-30T14:02:55.136351809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trdcz,Uid:393557f5-5039-40ec-9f28-041efc79b269,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:55.245122 systemd[1]: run-netns-cni\x2dcf2dde01\x2d3cc0\x2df150\x2d2746\x2dd8b67a5d7448.mount: Deactivated successfully. Jan 30 14:02:55.245467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61-shm.mount: Deactivated successfully. 
Jan 30 14:02:55.268408 containerd[1584]: time="2025-01-30T14:02:55.268236116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trdcz,Uid:393557f5-5039-40ec-9f28-041efc79b269,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 14:02:55.269771 kubelet[2757]: E0130 14:02:55.269151 2757 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 14:02:55.269771 kubelet[2757]: E0130 14:02:55.269248 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-trdcz" Jan 30 14:02:55.269771 kubelet[2757]: E0130 14:02:55.269283 2757 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-trdcz" Jan 30 14:02:55.269771 kubelet[2757]: E0130 14:02:55.269370 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-trdcz_kube-system(393557f5-5039-40ec-9f28-041efc79b269)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-trdcz_kube-system(393557f5-5039-40ec-9f28-041efc79b269)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10f28e143ae1b3bfa870f069f615f7cad836b1cbeaed85f10870f355877c8a61\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-trdcz" podUID="393557f5-5039-40ec-9f28-041efc79b269" Jan 30 14:02:55.737687 kubelet[2757]: E0130 14:02:55.737631 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:55.762577 kubelet[2757]: I0130 14:02:55.762063 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-fxrfp" podStartSLOduration=4.334922177 podStartE2EDuration="11.762030575s" podCreationTimestamp="2025-01-30 14:02:44 +0000 UTC" firstStartedPulling="2025-01-30 14:02:46.703717069 +0000 UTC m=+15.632672423" lastFinishedPulling="2025-01-30 14:02:54.130825469 +0000 UTC m=+23.059780821" observedRunningTime="2025-01-30 14:02:55.761878392 +0000 UTC m=+24.690833759" watchObservedRunningTime="2025-01-30 14:02:55.762030575 +0000 UTC m=+24.690985936" Jan 30 14:02:56.119145 systemd-networkd[1226]: flannel.1: Link UP Jan 30 14:02:56.119155 systemd-networkd[1226]: flannel.1: Gained carrier Jan 30 14:02:56.742831 kubelet[2757]: E0130 14:02:56.741187 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:02:57.192686 systemd-networkd[1226]: flannel.1: Gained IPv6LL Jan 30 14:03:07.440084 kubelet[2757]: E0130 14:03:07.434552 2757 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:07.440903 containerd[1584]: time="2025-01-30T14:03:07.438521278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ccbc,Uid:145dd1c8-f19a-40fb-ba55-1f0ed2382665,Namespace:kube-system,Attempt:0,}" Jan 30 14:03:07.548022 systemd-networkd[1226]: cni0: Link UP Jan 30 14:03:07.548032 systemd-networkd[1226]: cni0: Gained carrier Jan 30 14:03:07.559511 systemd-networkd[1226]: cni0: Lost carrier Jan 30 14:03:07.577139 kernel: cni0: port 1(veth27766922) entered blocking state Jan 30 14:03:07.577386 kernel: cni0: port 1(veth27766922) entered disabled state Jan 30 14:03:07.575079 systemd-networkd[1226]: veth27766922: Link UP Jan 30 14:03:07.580110 kernel: veth27766922: entered allmulticast mode Jan 30 14:03:07.580408 kernel: veth27766922: entered promiscuous mode Jan 30 14:03:07.590201 kernel: cni0: port 1(veth27766922) entered blocking state Jan 30 14:03:07.590397 kernel: cni0: port 1(veth27766922) entered forwarding state Jan 30 14:03:07.590432 kernel: cni0: port 1(veth27766922) entered disabled state Jan 30 14:03:07.618105 kernel: cni0: port 1(veth27766922) entered blocking state Jan 30 14:03:07.620164 kernel: cni0: port 1(veth27766922) entered forwarding state Jan 30 14:03:07.618515 systemd-networkd[1226]: veth27766922: Gained carrier Jan 30 14:03:07.624829 systemd-networkd[1226]: cni0: Gained carrier Jan 30 14:03:07.648981 containerd[1584]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, 
"mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 30 14:03:07.648981 containerd[1584]: delegateAdd: netconf sent to delegate plugin: Jan 30 14:03:07.737101 containerd[1584]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T14:03:07.735543380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:03:07.737101 containerd[1584]: time="2025-01-30T14:03:07.735688458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:03:07.737101 containerd[1584]: time="2025-01-30T14:03:07.735745591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:07.738886 containerd[1584]: time="2025-01-30T14:03:07.738477583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:07.846720 systemd[1]: run-containerd-runc-k8s.io-a3fcfb47816c294f5bde9b824f437e3aa5644cb2401874e30336240cf5ce940e-runc.KgBcXv.mount: Deactivated successfully. 
Jan 30 14:03:07.964815 containerd[1584]: time="2025-01-30T14:03:07.964523994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9ccbc,Uid:145dd1c8-f19a-40fb-ba55-1f0ed2382665,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3fcfb47816c294f5bde9b824f437e3aa5644cb2401874e30336240cf5ce940e\"" Jan 30 14:03:08.002218 kubelet[2757]: E0130 14:03:07.999154 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:08.066423 containerd[1584]: time="2025-01-30T14:03:08.066344018Z" level=info msg="CreateContainer within sandbox \"a3fcfb47816c294f5bde9b824f437e3aa5644cb2401874e30336240cf5ce940e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:03:08.129823 containerd[1584]: time="2025-01-30T14:03:08.129653808Z" level=info msg="CreateContainer within sandbox \"a3fcfb47816c294f5bde9b824f437e3aa5644cb2401874e30336240cf5ce940e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be43bcb0db2d01d2f6a15ca469dda9b482b5ef83df573e0536d992303243557c\"" Jan 30 14:03:08.132444 containerd[1584]: time="2025-01-30T14:03:08.130851601Z" level=info msg="StartContainer for \"be43bcb0db2d01d2f6a15ca469dda9b482b5ef83df573e0536d992303243557c\"" Jan 30 14:03:08.295972 containerd[1584]: time="2025-01-30T14:03:08.295496964Z" level=info msg="StartContainer for \"be43bcb0db2d01d2f6a15ca469dda9b482b5ef83df573e0536d992303243557c\" returns successfully" Jan 30 14:03:08.446000 kubelet[2757]: E0130 14:03:08.441613 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:08.452002 containerd[1584]: time="2025-01-30T14:03:08.448014921Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-trdcz,Uid:393557f5-5039-40ec-9f28-041efc79b269,Namespace:kube-system,Attempt:0,}" Jan 30 14:03:08.565227 systemd-networkd[1226]: veth6f068afe: Link UP Jan 30 14:03:08.574578 kernel: cni0: port 2(veth6f068afe) entered blocking state Jan 30 14:03:08.574878 kernel: cni0: port 2(veth6f068afe) entered disabled state Jan 30 14:03:08.577015 kernel: veth6f068afe: entered allmulticast mode Jan 30 14:03:08.588302 kernel: veth6f068afe: entered promiscuous mode Jan 30 14:03:08.618806 kernel: cni0: port 2(veth6f068afe) entered blocking state Jan 30 14:03:08.619031 kernel: cni0: port 2(veth6f068afe) entered forwarding state Jan 30 14:03:08.620336 systemd-networkd[1226]: veth6f068afe: Gained carrier Jan 30 14:03:08.662795 containerd[1584]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Jan 30 14:03:08.662795 containerd[1584]: delegateAdd: netconf sent to delegate plugin: Jan 30 14:03:08.739930 containerd[1584]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T14:03:08.739657534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:03:08.740210 containerd[1584]: time="2025-01-30T14:03:08.740144313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:03:08.740275 containerd[1584]: time="2025-01-30T14:03:08.740214216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:08.746006 containerd[1584]: time="2025-01-30T14:03:08.743061062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:08.862144 kubelet[2757]: E0130 14:03:08.862068 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:08.929486 kubelet[2757]: I0130 14:03:08.929379 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9ccbc" podStartSLOduration=24.929353084 podStartE2EDuration="24.929353084s" podCreationTimestamp="2025-01-30 14:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:08.893634665 +0000 UTC m=+37.822590017" watchObservedRunningTime="2025-01-30 14:03:08.929353084 +0000 UTC m=+37.858308433" Jan 30 14:03:08.986698 containerd[1584]: time="2025-01-30T14:03:08.985998383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trdcz,Uid:393557f5-5039-40ec-9f28-041efc79b269,Namespace:kube-system,Attempt:0,} returns sandbox id \"d57fb69f19b45a6f5293817233afb6aa0f4d4036cb381261246625bc99696087\"" Jan 30 14:03:08.989305 kubelet[2757]: E0130 14:03:08.987736 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:08.999404 containerd[1584]: time="2025-01-30T14:03:08.998919412Z" level=info msg="CreateContainer within sandbox 
\"d57fb69f19b45a6f5293817233afb6aa0f4d4036cb381261246625bc99696087\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:03:09.036546 containerd[1584]: time="2025-01-30T14:03:09.036478035Z" level=info msg="CreateContainer within sandbox \"d57fb69f19b45a6f5293817233afb6aa0f4d4036cb381261246625bc99696087\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2eea9e1edae2cff169147d5e824bce74536fe31a93e796ae1495ead8e3b88dc2\"" Jan 30 14:03:09.040748 containerd[1584]: time="2025-01-30T14:03:09.040349978Z" level=info msg="StartContainer for \"2eea9e1edae2cff169147d5e824bce74536fe31a93e796ae1495ead8e3b88dc2\"" Jan 30 14:03:09.190839 containerd[1584]: time="2025-01-30T14:03:09.190644206Z" level=info msg="StartContainer for \"2eea9e1edae2cff169147d5e824bce74536fe31a93e796ae1495ead8e3b88dc2\" returns successfully" Jan 30 14:03:09.226211 systemd-networkd[1226]: cni0: Gained IPv6LL Jan 30 14:03:09.352624 systemd-networkd[1226]: veth27766922: Gained IPv6LL Jan 30 14:03:09.864050 kubelet[2757]: E0130 14:03:09.863411 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:09.869546 kubelet[2757]: E0130 14:03:09.869152 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:09.892101 kubelet[2757]: I0130 14:03:09.891876 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-trdcz" podStartSLOduration=25.891842134 podStartE2EDuration="25.891842134s" podCreationTimestamp="2025-01-30 14:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:09.89150747 +0000 UTC m=+38.820462833" 
watchObservedRunningTime="2025-01-30 14:03:09.891842134 +0000 UTC m=+38.820797493" Jan 30 14:03:10.120408 systemd-networkd[1226]: veth6f068afe: Gained IPv6LL Jan 30 14:03:10.867828 kubelet[2757]: E0130 14:03:10.866261 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:15.137976 kubelet[2757]: E0130 14:03:15.137487 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:15.894683 kubelet[2757]: E0130 14:03:15.894056 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:20.330608 systemd[1]: Started sshd@8-146.190.128.120:22-147.75.109.163:35934.service - OpenSSH per-connection server daemon (147.75.109.163:35934). Jan 30 14:03:20.425763 sshd[3679]: Accepted publickey for core from 147.75.109.163 port 35934 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:20.428827 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:20.440302 systemd-logind[1562]: New session 8 of user core. Jan 30 14:03:20.447575 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:03:20.705299 sshd[3679]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:20.715604 systemd[1]: sshd@8-146.190.128.120:22-147.75.109.163:35934.service: Deactivated successfully. Jan 30 14:03:20.720967 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:03:20.721184 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:03:20.726344 systemd-logind[1562]: Removed session 8. 
Jan 30 14:03:25.732220 systemd[1]: Started sshd@9-146.190.128.120:22-147.75.109.163:35944.service - OpenSSH per-connection server daemon (147.75.109.163:35944). Jan 30 14:03:25.757384 kernel: hrtimer: interrupt took 11116956 ns Jan 30 14:03:25.835352 sshd[3715]: Accepted publickey for core from 147.75.109.163 port 35944 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:25.843045 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:25.856479 systemd-logind[1562]: New session 9 of user core. Jan 30 14:03:25.877450 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:03:26.195445 sshd[3715]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:26.202664 systemd[1]: sshd@9-146.190.128.120:22-147.75.109.163:35944.service: Deactivated successfully. Jan 30 14:03:26.220769 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:03:26.223130 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:03:26.227436 systemd-logind[1562]: Removed session 9. Jan 30 14:03:31.205463 systemd[1]: Started sshd@10-146.190.128.120:22-147.75.109.163:41850.service - OpenSSH per-connection server daemon (147.75.109.163:41850). Jan 30 14:03:31.269869 sshd[3751]: Accepted publickey for core from 147.75.109.163 port 41850 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:31.274735 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:31.288563 systemd-logind[1562]: New session 10 of user core. Jan 30 14:03:31.303652 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:03:31.569417 sshd[3751]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:31.583342 systemd[1]: sshd@10-146.190.128.120:22-147.75.109.163:41850.service: Deactivated successfully. Jan 30 14:03:31.595580 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 30 14:03:31.601808 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:03:31.606011 systemd-logind[1562]: Removed session 10. Jan 30 14:03:36.583611 systemd[1]: Started sshd@11-146.190.128.120:22-147.75.109.163:41858.service - OpenSSH per-connection server daemon (147.75.109.163:41858). Jan 30 14:03:36.650038 sshd[3795]: Accepted publickey for core from 147.75.109.163 port 41858 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:36.653099 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:36.678565 systemd-logind[1562]: New session 11 of user core. Jan 30 14:03:36.682626 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:03:37.013598 sshd[3795]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:37.030624 systemd[1]: Started sshd@12-146.190.128.120:22-147.75.109.163:41866.service - OpenSSH per-connection server daemon (147.75.109.163:41866). Jan 30 14:03:37.038671 systemd[1]: sshd@11-146.190.128.120:22-147.75.109.163:41858.service: Deactivated successfully. Jan 30 14:03:37.045861 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:03:37.050209 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:03:37.060491 systemd-logind[1562]: Removed session 11. Jan 30 14:03:37.126014 sshd[3822]: Accepted publickey for core from 147.75.109.163 port 41866 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:37.129245 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:37.140882 systemd-logind[1562]: New session 12 of user core. Jan 30 14:03:37.148793 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 14:03:37.452360 sshd[3822]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:37.469222 systemd[1]: Started sshd@13-146.190.128.120:22-147.75.109.163:52342.service - OpenSSH per-connection server daemon (147.75.109.163:52342). Jan 30 14:03:37.473339 systemd[1]: sshd@12-146.190.128.120:22-147.75.109.163:41866.service: Deactivated successfully. Jan 30 14:03:37.514428 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:03:37.532201 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:03:37.543491 systemd-logind[1562]: Removed session 12. Jan 30 14:03:37.634933 sshd[3834]: Accepted publickey for core from 147.75.109.163 port 52342 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:37.642813 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:37.660832 systemd-logind[1562]: New session 13 of user core. Jan 30 14:03:37.667719 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:03:37.964663 sshd[3834]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:37.988161 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:03:37.989246 systemd[1]: sshd@13-146.190.128.120:22-147.75.109.163:52342.service: Deactivated successfully. Jan 30 14:03:37.995661 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:03:37.998513 systemd-logind[1562]: Removed session 13. Jan 30 14:03:42.433567 kubelet[2757]: E0130 14:03:42.432852 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:42.985702 systemd[1]: Started sshd@14-146.190.128.120:22-147.75.109.163:52348.service - OpenSSH per-connection server daemon (147.75.109.163:52348). 
Jan 30 14:03:43.044845 sshd[3872]: Accepted publickey for core from 147.75.109.163 port 52348 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:43.045618 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:43.061595 systemd-logind[1562]: New session 14 of user core. Jan 30 14:03:43.066663 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:03:43.348086 sshd[3872]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:43.361765 systemd[1]: Started sshd@15-146.190.128.120:22-147.75.109.163:52362.service - OpenSSH per-connection server daemon (147.75.109.163:52362). Jan 30 14:03:43.362926 systemd[1]: sshd@14-146.190.128.120:22-147.75.109.163:52348.service: Deactivated successfully. Jan 30 14:03:43.381156 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:03:43.382106 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:03:43.385505 systemd-logind[1562]: Removed session 14. Jan 30 14:03:43.435875 sshd[3883]: Accepted publickey for core from 147.75.109.163 port 52362 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:43.443613 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:43.463414 systemd-logind[1562]: New session 15 of user core. Jan 30 14:03:43.469756 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:03:44.089096 sshd[3883]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:44.099706 systemd[1]: Started sshd@16-146.190.128.120:22-147.75.109.163:52378.service - OpenSSH per-connection server daemon (147.75.109.163:52378). Jan 30 14:03:44.102333 systemd[1]: sshd@15-146.190.128.120:22-147.75.109.163:52362.service: Deactivated successfully. Jan 30 14:03:44.111873 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. 
Jan 30 14:03:44.118465 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:03:44.131493 systemd-logind[1562]: Removed session 15. Jan 30 14:03:44.214431 sshd[3895]: Accepted publickey for core from 147.75.109.163 port 52378 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:44.218874 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:44.232794 systemd-logind[1562]: New session 16 of user core. Jan 30 14:03:44.237995 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:03:46.999664 sshd[3895]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:47.030528 systemd[1]: Started sshd@17-146.190.128.120:22-147.75.109.163:52384.service - OpenSSH per-connection server daemon (147.75.109.163:52384). Jan 30 14:03:47.031550 systemd[1]: sshd@16-146.190.128.120:22-147.75.109.163:52378.service: Deactivated successfully. Jan 30 14:03:47.058879 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:03:47.073102 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:03:47.105023 systemd-logind[1562]: Removed session 16. Jan 30 14:03:47.170369 sshd[3923]: Accepted publickey for core from 147.75.109.163 port 52384 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:47.174411 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:47.197476 systemd-logind[1562]: New session 17 of user core. Jan 30 14:03:47.206888 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:03:47.977301 sshd[3923]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:48.005183 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:03:48.007778 systemd[1]: sshd@17-146.190.128.120:22-147.75.109.163:52384.service: Deactivated successfully. Jan 30 14:03:48.019605 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 30 14:03:48.049309 systemd[1]: Started sshd@18-146.190.128.120:22-147.75.109.163:59864.service - OpenSSH per-connection server daemon (147.75.109.163:59864). Jan 30 14:03:48.053193 systemd-logind[1562]: Removed session 17. Jan 30 14:03:48.128430 sshd[3954]: Accepted publickey for core from 147.75.109.163 port 59864 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:48.132402 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:48.145025 systemd-logind[1562]: New session 18 of user core. Jan 30 14:03:48.150705 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:03:48.467900 sshd[3954]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:48.487991 systemd[1]: sshd@18-146.190.128.120:22-147.75.109.163:59864.service: Deactivated successfully. Jan 30 14:03:48.498526 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:03:48.502450 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:03:48.506219 systemd-logind[1562]: Removed session 18. Jan 30 14:03:51.434982 kubelet[2757]: E0130 14:03:51.434193 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:53.474644 systemd[1]: Started sshd@19-146.190.128.120:22-147.75.109.163:59872.service - OpenSSH per-connection server daemon (147.75.109.163:59872). Jan 30 14:03:53.552346 sshd[3992]: Accepted publickey for core from 147.75.109.163 port 59872 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:53.555775 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:53.569216 systemd-logind[1562]: New session 19 of user core. Jan 30 14:03:53.573805 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 14:03:53.808542 sshd[3992]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:53.818791 systemd[1]: sshd@19-146.190.128.120:22-147.75.109.163:59872.service: Deactivated successfully. Jan 30 14:03:53.832533 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:03:53.833399 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:03:53.838705 systemd-logind[1562]: Removed session 19. Jan 30 14:03:58.438959 kubelet[2757]: E0130 14:03:58.433391 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:03:58.821626 systemd[1]: Started sshd@20-146.190.128.120:22-147.75.109.163:55916.service - OpenSSH per-connection server daemon (147.75.109.163:55916). Jan 30 14:03:58.897808 sshd[4026]: Accepted publickey for core from 147.75.109.163 port 55916 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:03:58.898899 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:58.907500 systemd-logind[1562]: New session 20 of user core. Jan 30 14:03:58.916852 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:03:59.144132 sshd[4026]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:59.148358 systemd[1]: sshd@20-146.190.128.120:22-147.75.109.163:55916.service: Deactivated successfully. Jan 30 14:03:59.159400 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:03:59.161095 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:03:59.168351 systemd-logind[1562]: Removed session 20. Jan 30 14:04:04.164644 systemd[1]: Started sshd@21-146.190.128.120:22-147.75.109.163:55930.service - OpenSSH per-connection server daemon (147.75.109.163:55930). 
Jan 30 14:04:04.233753 sshd[4061]: Accepted publickey for core from 147.75.109.163 port 55930 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:04:04.238653 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:04.248016 systemd-logind[1562]: New session 21 of user core. Jan 30 14:04:04.258078 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:04:04.523294 sshd[4061]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:04.529631 systemd[1]: sshd@21-146.190.128.120:22-147.75.109.163:55930.service: Deactivated successfully. Jan 30 14:04:04.540051 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:04:04.543763 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:04:04.546686 systemd-logind[1562]: Removed session 21. Jan 30 14:04:09.541663 systemd[1]: Started sshd@22-146.190.128.120:22-147.75.109.163:44562.service - OpenSSH per-connection server daemon (147.75.109.163:44562). Jan 30 14:04:09.613001 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 44562 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:04:09.615725 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:09.627046 systemd-logind[1562]: New session 22 of user core. Jan 30 14:04:09.632850 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:04:09.871340 sshd[4096]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:09.876795 systemd[1]: sshd@22-146.190.128.120:22-147.75.109.163:44562.service: Deactivated successfully. Jan 30 14:04:09.884753 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:04:09.885183 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:04:09.889344 systemd-logind[1562]: Removed session 22.