Feb 13 20:15:57.261421 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:15:57.261472 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:57.261494 kernel: BIOS-provided physical RAM map:
Feb 13 20:15:57.261507 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:15:57.261518 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:15:57.261529 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:15:57.261543 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 20:15:57.261555 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 20:15:57.261566 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:15:57.261580 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:15:57.261593 kernel: NX (Execute Disable) protection: active
Feb 13 20:15:57.261603 kernel: APIC: Static calls initialized
Feb 13 20:15:57.261622 kernel: SMBIOS 2.8 present.
Feb 13 20:15:57.261632 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:15:57.261645 kernel: Hypervisor detected: KVM
Feb 13 20:15:57.261659 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:15:57.261677 kernel: kvm-clock: using sched offset of 4070558646 cycles
Feb 13 20:15:57.261690 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:15:57.261703 kernel: tsc: Detected 1995.312 MHz processor
Feb 13 20:15:57.261715 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:15:57.261728 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:15:57.261740 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 20:15:57.261753 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:15:57.261766 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:15:57.261783 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:15:57.261796 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 20:15:57.261809 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261821 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261833 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261846 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:15:57.261858 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261871 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261884 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261900 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:57.261913 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:15:57.261925 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:15:57.261969 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:15:57.261982 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:15:57.261994 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:15:57.262008 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:15:57.262032 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:15:57.262046 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:15:57.262060 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:15:57.262073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:15:57.262087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:15:57.262106 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 20:15:57.262120 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 20:15:57.262138 kernel: Zone ranges:
Feb 13 20:15:57.262152 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:15:57.262165 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 20:15:57.262178 kernel: Normal empty
Feb 13 20:15:57.262192 kernel: Movable zone start for each node
Feb 13 20:15:57.262205 kernel: Early memory node ranges
Feb 13 20:15:57.262218 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:15:57.262232 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 20:15:57.262245 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 20:15:57.262262 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:15:57.262275 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:15:57.262294 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 20:15:57.262308 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:15:57.262320 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:15:57.262334 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:15:57.262348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:15:57.262361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:15:57.262374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:15:57.262392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:15:57.262405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:15:57.262419 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:15:57.262432 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:15:57.262446 kernel: TSC deadline timer available
Feb 13 20:15:57.262458 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:15:57.262472 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:15:57.262485 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:15:57.262504 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:15:57.262518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:15:57.262537 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:15:57.262551 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:15:57.262565 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:15:57.262580 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:15:57.262594 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:15:57.262611 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:57.262626 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:15:57.262643 kernel: random: crng init done
Feb 13 20:15:57.262657 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:15:57.262671 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:15:57.262685 kernel: Fallback order for Node 0: 0
Feb 13 20:15:57.262699 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 20:15:57.262713 kernel: Policy zone: DMA32
Feb 13 20:15:57.262727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:15:57.262742 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved)
Feb 13 20:15:57.262756 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:15:57.262774 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:15:57.262788 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:15:57.262802 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:15:57.262815 kernel: Dynamic Preempt: voluntary
Feb 13 20:15:57.262829 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:15:57.262844 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:15:57.262858 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:15:57.262873 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:15:57.262887 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:15:57.262906 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:15:57.262921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:15:57.262980 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:15:57.262995 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:15:57.263009 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:15:57.263028 kernel: Console: colour VGA+ 80x25
Feb 13 20:15:57.263041 kernel: printk: console [tty0] enabled
Feb 13 20:15:57.263055 kernel: printk: console [ttyS0] enabled
Feb 13 20:15:57.263069 kernel: ACPI: Core revision 20230628
Feb 13 20:15:57.263083 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:15:57.263102 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:15:57.263114 kernel: x2apic enabled
Feb 13 20:15:57.263126 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:15:57.263137 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:15:57.263151 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Feb 13 20:15:57.263166 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Feb 13 20:15:57.263179 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:15:57.263193 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:15:57.263225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:15:57.263239 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:15:57.263255 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:15:57.263274 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:15:57.263290 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:15:57.263306 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:15:57.263321 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:15:57.263336 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:15:57.263351 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:15:57.263376 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:15:57.263391 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:15:57.263406 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:15:57.263422 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:15:57.263436 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:15:57.263451 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:15:57.263466 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:15:57.263481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:15:57.263501 kernel: landlock: Up and running.
Feb 13 20:15:57.263517 kernel: SELinux: Initializing.
Feb 13 20:15:57.263532 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:57.263546 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:57.263563 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:15:57.263578 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:57.263594 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:57.263610 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:57.263625 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 20:15:57.263646 kernel: signal: max sigframe size: 1776
Feb 13 20:15:57.263661 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:15:57.263677 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:15:57.263693 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:15:57.263708 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:15:57.263722 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:15:57.263736 kernel: .... node #0, CPUs: #1
Feb 13 20:15:57.263751 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:15:57.263772 kernel: smpboot: Max logical packages: 1
Feb 13 20:15:57.263793 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Feb 13 20:15:57.263808 kernel: devtmpfs: initialized
Feb 13 20:15:57.263824 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:15:57.263840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:15:57.263855 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:15:57.263870 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:15:57.263885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:15:57.263900 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:15:57.263914 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:15:57.263958 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:15:57.263973 kernel: audit: type=2000 audit(1739477754.974:1): state=initialized audit_enabled=0 res=1
Feb 13 20:15:57.263988 kernel: cpuidle: using governor menu
Feb 13 20:15:57.264003 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:15:57.264018 kernel: dca service started, version 1.12.1
Feb 13 20:15:57.264033 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:15:57.264047 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:15:57.264063 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:15:57.264079 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:15:57.264099 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:15:57.264114 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:15:57.264129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:15:57.264143 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:15:57.264158 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:15:57.264172 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:15:57.264187 kernel: ACPI: Interpreter enabled
Feb 13 20:15:57.264201 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:15:57.264216 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:15:57.264236 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:15:57.264250 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:15:57.264265 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:15:57.264280 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:15:57.264694 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:15:57.264876 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:15:57.265084 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:15:57.265115 kernel: acpiphp: Slot [3] registered
Feb 13 20:15:57.265128 kernel: acpiphp: Slot [4] registered
Feb 13 20:15:57.265140 kernel: acpiphp: Slot [5] registered
Feb 13 20:15:57.265155 kernel: acpiphp: Slot [6] registered
Feb 13 20:15:57.265167 kernel: acpiphp: Slot [7] registered
Feb 13 20:15:57.265180 kernel: acpiphp: Slot [8] registered
Feb 13 20:15:57.265193 kernel: acpiphp: Slot [9] registered
Feb 13 20:15:57.265207 kernel: acpiphp: Slot [10] registered
Feb 13 20:15:57.265220 kernel: acpiphp: Slot [11] registered
Feb 13 20:15:57.265240 kernel: acpiphp: Slot [12] registered
Feb 13 20:15:57.265251 kernel: acpiphp: Slot [13] registered
Feb 13 20:15:57.265264 kernel: acpiphp: Slot [14] registered
Feb 13 20:15:57.265434 kernel: acpiphp: Slot [15] registered
Feb 13 20:15:57.265447 kernel: acpiphp: Slot [16] registered
Feb 13 20:15:57.265460 kernel: acpiphp: Slot [17] registered
Feb 13 20:15:57.265473 kernel: acpiphp: Slot [18] registered
Feb 13 20:15:57.265485 kernel: acpiphp: Slot [19] registered
Feb 13 20:15:57.265498 kernel: acpiphp: Slot [20] registered
Feb 13 20:15:57.265510 kernel: acpiphp: Slot [21] registered
Feb 13 20:15:57.265534 kernel: acpiphp: Slot [22] registered
Feb 13 20:15:57.265549 kernel: acpiphp: Slot [23] registered
Feb 13 20:15:57.265562 kernel: acpiphp: Slot [24] registered
Feb 13 20:15:57.265574 kernel: acpiphp: Slot [25] registered
Feb 13 20:15:57.265586 kernel: acpiphp: Slot [26] registered
Feb 13 20:15:57.265598 kernel: acpiphp: Slot [27] registered
Feb 13 20:15:57.265610 kernel: acpiphp: Slot [28] registered
Feb 13 20:15:57.265621 kernel: acpiphp: Slot [29] registered
Feb 13 20:15:57.265633 kernel: acpiphp: Slot [30] registered
Feb 13 20:15:57.265645 kernel: acpiphp: Slot [31] registered
Feb 13 20:15:57.265662 kernel: PCI host bridge to bus 0000:00
Feb 13 20:15:57.265914 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:15:57.266110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:15:57.266259 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:57.266409 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:57.266549 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:57.266685 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:15:57.266953 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:15:57.267165 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:15:57.267366 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:15:57.267536 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:15:57.267690 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:15:57.267840 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:15:57.268042 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:15:57.268193 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:15:57.268380 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:15:57.268541 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:15:57.268753 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:15:57.268905 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:15:57.269104 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:15:57.269291 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:15:57.269450 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:15:57.269560 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:15:57.269671 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:15:57.269797 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:15:57.269915 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:15:57.270144 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:57.270508 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:15:57.270642 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:15:57.270759 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:15:57.270901 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:57.273315 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:15:57.273465 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:15:57.273634 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:15:57.273804 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:15:57.273914 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:15:57.274037 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:15:57.274140 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:15:57.274270 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:57.276622 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:15:57.277141 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:15:57.277331 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:15:57.277534 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:57.277696 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:15:57.277861 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:15:57.278243 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:15:57.278462 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:15:57.278644 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:15:57.278811 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:15:57.278834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:15:57.278852 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:15:57.278868 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:15:57.278885 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:15:57.278902 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:15:57.278926 kernel: iommu: Default domain type: Translated
Feb 13 20:15:57.284243 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:15:57.284265 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:15:57.284279 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:15:57.284293 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:15:57.284307 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 20:15:57.284753 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:15:57.285028 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:15:57.285212 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:15:57.285245 kernel: vgaarb: loaded
Feb 13 20:15:57.285259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:15:57.285272 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:15:57.285285 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:15:57.285298 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:15:57.285315 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:15:57.285331 kernel: pnp: PnP ACPI init
Feb 13 20:15:57.285346 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:15:57.285361 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:15:57.285381 kernel: NET: Registered PF_INET protocol family
Feb 13 20:15:57.285396 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:15:57.285412 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:15:57.285428 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:15:57.285443 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:57.285781 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:15:57.285804 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:15:57.285819 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:57.285846 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:57.285861 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:15:57.285876 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:15:57.286191 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:15:57.286329 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:15:57.286479 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:57.286625 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:57.286906 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:57.288357 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:15:57.289980 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:15:57.290074 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:15:57.290472 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 52650 usecs
Feb 13 20:15:57.290506 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:15:57.290522 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:15:57.290538 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Feb 13 20:15:57.290552 kernel: Initialise system trusted keyrings
Feb 13 20:15:57.290567 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:15:57.290597 kernel: Key type asymmetric registered
Feb 13 20:15:57.290611 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:15:57.290624 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:15:57.290638 kernel: io scheduler mq-deadline registered
Feb 13 20:15:57.290652 kernel: io scheduler kyber registered
Feb 13 20:15:57.290666 kernel: io scheduler bfq registered
Feb 13 20:15:57.290682 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:15:57.290700 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:15:57.290716 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:15:57.290734 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:15:57.290748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:15:57.290763 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:15:57.290778 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:15:57.290792 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:15:57.290804 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:15:57.292674 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:15:57.292724 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:15:57.292892 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:15:57.293569 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:15:56 UTC (1739477756)
Feb 13 20:15:57.293708 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:15:57.293725 kernel: intel_pstate: CPU model not supported
Feb 13 20:15:57.293738 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:15:57.293753 kernel: Segment Routing with IPv6
Feb 13 20:15:57.293767 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:15:57.293780 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:15:57.293793 kernel: Key type dns_resolver registered
Feb 13 20:15:57.293821 kernel: IPI shorthand broadcast: enabled
Feb 13 20:15:57.293834 kernel: sched_clock: Marking stable (1352031854, 178015091)->(1614960142, -84913197)
Feb 13 20:15:57.293847 kernel: registered taskstats version 1
Feb 13 20:15:57.293860 kernel: Loading compiled-in X.509 certificates
Feb 13 20:15:57.293873 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:15:57.293886 kernel: Key type .fscrypt registered
Feb 13 20:15:57.293899 kernel: Key type fscrypt-provisioning registered
Feb 13 20:15:57.293912 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:15:57.293928 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:15:57.296120 kernel: ima: No architecture policies found
Feb 13 20:15:57.296148 kernel: clk: Disabling unused clocks
Feb 13 20:15:57.296163 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:15:57.296177 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:15:57.296228 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:15:57.296245 kernel: Run /init as init process
Feb 13 20:15:57.296259 kernel: with arguments:
Feb 13 20:15:57.296274 kernel: /init
Feb 13 20:15:57.296287 kernel: with environment:
Feb 13 20:15:57.296305 kernel: HOME=/
Feb 13 20:15:57.296319 kernel: TERM=linux
Feb 13 20:15:57.296334 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:15:57.296355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:57.296374 systemd[1]: Detected virtualization kvm.
Feb 13 20:15:57.296389 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:57.296404 systemd[1]: Running in initrd.
Feb 13 20:15:57.296422 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:15:57.296436 systemd[1]: Hostname set to .
Feb 13 20:15:57.296451 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:15:57.296467 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:15:57.296483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:57.296499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:57.296516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:15:57.296531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:57.296559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:15:57.296572 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:15:57.296589 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:15:57.296602 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:15:57.296616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:57.296632 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:57.296646 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:15:57.296667 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:57.296682 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:57.296701 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:15:57.296720 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:57.296734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:57.296752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:15:57.296767 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:15:57.296782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:57.296796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:57.296810 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:57.296824 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:15:57.296838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:15:57.296852 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:57.296866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:15:57.296884 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:15:57.296898 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:57.296912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:57.296926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:57.298080 systemd-journald[182]: Collecting audit messages is disabled.
Feb 13 20:15:57.298131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:57.298146 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:57.298161 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:15:57.298177 systemd-journald[182]: Journal started
Feb 13 20:15:57.298214 systemd-journald[182]: Runtime Journal (/run/log/journal/f5bff770430c42a79f66b393e2314817) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:15:57.310026 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:15:57.315008 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:15:57.316053 systemd-modules-load[183]: Inserted module 'overlay'
Feb 13 20:15:57.409928 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:15:57.410018 kernel: Bridge firewalling registered
Feb 13 20:15:57.346460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:15:57.369199 systemd-modules-load[183]: Inserted module 'br_netfilter'
Feb 13 20:15:57.420519 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:57.431074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:57.432404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:57.470729 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:57.492688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:15:57.503541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:15:57.511315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:57.544361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:57.547404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:57.552353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:57.561469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:15:57.577771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:57.601825 dracut-cmdline[216]: dracut-dracut-053
Feb 13 20:15:57.612975 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:57.641378 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 20:15:57.641413 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:15:57.641459 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:15:57.648389 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 20:15:57.650664 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:57.651603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:15:57.799384 kernel: SCSI subsystem initialized
Feb 13 20:15:57.823209 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:15:57.855403 kernel: iscsi: registered transport (tcp)
Feb 13 20:15:57.897027 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:15:57.897161 kernel: QLogic iSCSI HBA Driver
Feb 13 20:15:58.000658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:58.015346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:15:58.060162 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:15:58.060348 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:15:58.065250 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:15:58.142054 kernel: raid6: avx2x4   gen() 16091 MB/s
Feb 13 20:15:58.161576 kernel: raid6: avx2x2   gen() 15788 MB/s
Feb 13 20:15:58.188118 kernel: raid6: avx2x1   gen()  9417 MB/s
Feb 13 20:15:58.188267 kernel: raid6: using algorithm avx2x4 gen() 16091 MB/s
Feb 13 20:15:58.225917 kernel: raid6: .... xor() 4135 MB/s, rmw enabled
Feb 13 20:15:58.226068 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:15:58.275067 kernel: xor: automatically using best checksumming function   avx
Feb 13 20:15:58.606999 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:15:58.642008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:58.653560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:58.700290 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 20:15:58.726835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:58.737552 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:15:58.813438 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Feb 13 20:15:58.919074 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:58.936771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:15:59.055496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:59.066346 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:15:59.104275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:59.113423 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:15:59.114449 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:59.115250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:15:59.127403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:15:59.154217 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:59.200000 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:15:59.419808 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:15:59.420098 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:15:59.420328 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:15:59.420348 kernel: libata version 3.00 loaded.
Feb 13 20:15:59.420366 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:15:59.420620 kernel: ACPI: bus type USB registered
Feb 13 20:15:59.420641 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:15:59.420671 kernel: scsi host1: ata_piix
Feb 13 20:15:59.420900 kernel: usbcore: registered new interface driver hub
Feb 13 20:15:59.420921 kernel: usbcore: registered new device driver usb
Feb 13 20:15:59.420965 kernel: scsi host2: ata_piix
Feb 13 20:15:59.421164 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:15:59.421186 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:15:59.421203 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:15:59.421221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:15:59.421250 kernel: GPT:9289727 != 125829119
Feb 13 20:15:59.421268 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:15:59.421286 kernel: GPT:9289727 != 125829119
Feb 13 20:15:59.421303 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:15:59.421320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:59.421337 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:15:59.421357 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:15:59.433612 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Feb 13 20:15:59.423566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:59.423926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:59.429109 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:59.430613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:59.431016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:59.432012 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:59.446924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:59.588190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:59.624729 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:59.680854 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
Feb 13 20:15:59.687983 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:15:59.708606 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:15:59.708929 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:15:59.709348 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:15:59.709576 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (464)
Feb 13 20:15:59.709599 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:15:59.709837 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:15:59.731913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:15:59.741714 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:15:59.756168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:15:59.767570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:15:59.768712 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:15:59.773774 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:59.780444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:15:59.842959 disk-uuid[551]: Primary Header is updated.
Feb 13 20:15:59.842959 disk-uuid[551]: Secondary Entries is updated.
Feb 13 20:15:59.842959 disk-uuid[551]: Secondary Header is updated.
Feb 13 20:15:59.862116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:59.873029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:00.898368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:00.901892 disk-uuid[552]: The operation has completed successfully.
Feb 13 20:16:01.073791 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:16:01.073964 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:16:01.086813 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:16:01.103299 sh[563]: Success
Feb 13 20:16:01.143980 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:16:01.281381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:16:01.295194 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:16:01.296333 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:16:01.371884 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:16:01.378654 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:01.378758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:16:01.378780 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:16:01.381557 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:16:01.402635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:16:01.408734 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:16:01.430859 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:16:01.435239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:16:01.486775 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:16:01.486874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:01.495484 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:01.516392 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:01.541243 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:16:01.549733 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:16:01.565583 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:16:01.577426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:16:01.910077 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:01.924421 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:16:01.997842 systemd-networkd[748]: lo: Link UP
Feb 13 20:16:01.997860 systemd-networkd[748]: lo: Gained carrier
Feb 13 20:16:02.001640 systemd-networkd[748]: Enumeration completed
Feb 13 20:16:02.009297 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:16:02.009305 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:16:02.018493 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:16:02.019521 systemd[1]: Reached target network.target - Network.
Feb 13 20:16:02.024274 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:02.024280 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:02.027279 systemd-networkd[748]: eth0: Link UP
Feb 13 20:16:02.027286 systemd-networkd[748]: eth0: Gained carrier
Feb 13 20:16:02.027303 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:16:02.030930 systemd-networkd[748]: eth1: Link UP
Feb 13 20:16:02.035197 ignition[659]: Ignition 2.19.0
Feb 13 20:16:02.030959 systemd-networkd[748]: eth1: Gained carrier
Feb 13 20:16:02.035206 ignition[659]: Stage: fetch-offline
Feb 13 20:16:02.030978 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:02.035251 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:02.039098 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:02.035266 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:02.035401 ignition[659]: parsed url from cmdline: ""
Feb 13 20:16:02.035406 ignition[659]: no config URL provided
Feb 13 20:16:02.035415 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:16:02.035425 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:16:02.035431 ignition[659]: failed to fetch config: resource requires networking
Feb 13 20:16:02.057382 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:16:02.035689 ignition[659]: Ignition finished successfully
Feb 13 20:16:02.078109 systemd-networkd[748]: eth0: DHCPv4 address 147.182.251.87/20, gateway 147.182.240.1 acquired from 169.254.169.253
Feb 13 20:16:02.109919 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Feb 13 20:16:02.168732 ignition[756]: Ignition 2.19.0
Feb 13 20:16:02.168750 ignition[756]: Stage: fetch
Feb 13 20:16:02.169248 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:02.169272 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:02.174665 ignition[756]: parsed url from cmdline: ""
Feb 13 20:16:02.174683 ignition[756]: no config URL provided
Feb 13 20:16:02.174697 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:16:02.174957 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:16:02.175021 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:16:02.219443 ignition[756]: GET result: OK
Feb 13 20:16:02.219795 ignition[756]: parsing config with SHA512: 9fadc5a4dc5e874bb6997b24dff3b9fb99895b97dfd0cc7c6356de13d912c6bfa374a1074238ca6818a8eebfdf86e4005d084cb2fabbf76707effcf74b503149
Feb 13 20:16:02.229975 unknown[756]: fetched base config from "system"
Feb 13 20:16:02.229988 unknown[756]: fetched base config from "system"
Feb 13 20:16:02.229998 unknown[756]: fetched user config from "digitalocean"
Feb 13 20:16:02.240067 ignition[756]: fetch: fetch complete
Feb 13 20:16:02.240087 ignition[756]: fetch: fetch passed
Feb 13 20:16:02.240212 ignition[756]: Ignition finished successfully
Feb 13 20:16:02.246311 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:16:02.261995 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:16:02.325793 ignition[764]: Ignition 2.19.0
Feb 13 20:16:02.326000 ignition[764]: Stage: kargs
Feb 13 20:16:02.326497 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:02.326517 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:02.338014 ignition[764]: kargs: kargs passed
Feb 13 20:16:02.338209 ignition[764]: Ignition finished successfully
Feb 13 20:16:02.346625 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:02.362263 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:16:02.416962 ignition[770]: Ignition 2.19.0
Feb 13 20:16:02.417001 ignition[770]: Stage: disks
Feb 13 20:16:02.417378 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:02.425301 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:16:02.417397 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:02.422038 ignition[770]: disks: disks passed
Feb 13 20:16:02.422151 ignition[770]: Ignition finished successfully
Feb 13 20:16:02.444787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:02.446493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:16:02.447380 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:16:02.448157 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:16:02.449042 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:16:02.465566 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:16:02.537733 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:16:02.556298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:16:02.575178 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:16:02.829115 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:16:02.828513 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:16:02.831541 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:16:02.845529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:02.877002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:16:02.885323 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Feb 13 20:16:02.900275 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786)
Feb 13 20:16:02.901268 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:16:02.928122 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:16:02.928165 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:02.928184 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:02.926465 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:16:02.926517 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:02.934823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:16:02.947061 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:02.947411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:16:02.975249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:03.049645 coreos-metadata[789]: Feb 13 20:16:03.047 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:16:03.078854 coreos-metadata[789]: Feb 13 20:16:03.078 INFO Fetch successful
Feb 13 20:16:03.087286 coreos-metadata[788]: Feb 13 20:16:03.086 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:16:03.103984 coreos-metadata[789]: Feb 13 20:16:03.103 INFO wrote hostname ci-4081.3.1-a-865b7d79a6 to /sysroot/etc/hostname
Feb 13 20:16:03.111002 coreos-metadata[788]: Feb 13 20:16:03.108 INFO Fetch successful
Feb 13 20:16:03.106676 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:16:03.121294 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb 13 20:16:03.121440 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Feb 13 20:16:03.129977 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:16:03.147668 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:16:03.157558 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:16:03.164765 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:16:03.361313 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:16:03.370987 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:16:03.377351 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:03.394118 systemd-networkd[748]: eth0: Gained IPv6LL
Feb 13 20:16:03.403827 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:16:03.407201 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:16:03.508508 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:16:03.546984 ignition[906]: INFO     : Ignition 2.19.0
Feb 13 20:16:03.546984 ignition[906]: INFO     : Stage: mount
Feb 13 20:16:03.546984 ignition[906]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:03.546984 ignition[906]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:03.551832 ignition[906]: INFO     : mount: mount passed
Feb 13 20:16:03.551832 ignition[906]: INFO     : Ignition finished successfully
Feb 13 20:16:03.555697 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:16:03.581145 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:16:03.841543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:03.889729 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919)
Feb 13 20:16:03.907103 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:16:03.907246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:03.907268 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:03.924424 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:03.932591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:03.977903 systemd-networkd[748]: eth1: Gained IPv6LL
Feb 13 20:16:04.018035 ignition[936]: INFO     : Ignition 2.19.0
Feb 13 20:16:04.021628 ignition[936]: INFO     : Stage: files
Feb 13 20:16:04.021628 ignition[936]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:04.021628 ignition[936]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:04.025213 ignition[936]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 20:16:04.031645 ignition[936]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 20:16:04.031645 ignition[936]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:16:04.058052 ignition[936]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:16:04.059662 ignition[936]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 20:16:04.063477 unknown[936]: wrote ssh authorized keys file for user: core
Feb 13 20:16:04.072333 ignition[936]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:16:04.072333 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:16:04.072333 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:16:04.366227 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:04.520016 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:16:04.552066 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:16:04.851595 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:16:05.639781 ignition[936]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:16:05.642122 ignition[936]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Feb 13 20:16:05.642122 ignition[936]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:05.652047 ignition[936]: INFO     : files: files passed
Feb 13 20:16:05.652047 ignition[936]: INFO     : Ignition finished successfully
Feb 13 20:16:05.665618 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:16:05.681390 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:16:05.686237 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:16:05.691981 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:16:05.692152 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:16:05.719837 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:05.719837 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:05.723678 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:05.727743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:05.729333 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:16:05.737303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:16:05.829403 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:16:05.829600 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:16:05.830734 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:16:05.831501 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:16:05.832304 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:16:05.858418 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:16:05.898216 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:05.912649 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:16:05.953113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:05.958591 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:05.959645 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:16:05.960684 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:16:05.963686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:05.966051 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:16:05.967299 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:16:05.968776 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:16:05.970872 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:05.976239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:05.977378 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:16:05.978400 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:05.980116 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:16:05.981162 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:16:05.982002 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:16:05.982657 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:16:05.982900 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:05.984142 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:05.995779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:05.998204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:16:05.998376 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:05.999920 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:16:06.000241 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:06.002481 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:16:06.002846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:06.008174 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:16:06.008415 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:16:06.010567 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:16:06.014825 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:16:06.054967 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:16:06.071635 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:06.072600 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:16:06.072962 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:06.074092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:16:06.074334 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:06.084800 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:16:06.085008 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:16:06.143187 ignition[989]: INFO : Ignition 2.19.0
Feb 13 20:16:06.145987 ignition[989]: INFO : Stage: umount
Feb 13 20:16:06.145987 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:06.145987 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:16:06.154756 ignition[989]: INFO : umount: umount passed
Feb 13 20:16:06.154756 ignition[989]: INFO : Ignition finished successfully
Feb 13 20:16:06.160627 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:16:06.160862 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:16:06.165616 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:16:06.165837 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:16:06.168749 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:16:06.168887 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:06.179414 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:16:06.179548 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:16:06.197592 systemd[1]: Stopped target network.target - Network.
Feb 13 20:16:06.198232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:16:06.198367 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:06.199177 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:16:06.199780 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:16:06.207096 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:06.214018 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:16:06.214700 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:16:06.218399 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:16:06.218516 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:06.228324 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:16:06.228447 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:06.229275 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:16:06.229363 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:16:06.230111 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:16:06.230173 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:06.231189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:16:06.232092 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:06.234623 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:16:06.235568 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:16:06.235778 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:16:06.237679 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:16:06.237839 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:16:06.253387 systemd-networkd[748]: eth1: DHCPv6 lease lost
Feb 13 20:16:06.274130 systemd-networkd[748]: eth0: DHCPv6 lease lost
Feb 13 20:16:06.278452 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:16:06.278806 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:06.283248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:16:06.283398 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:06.291518 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:16:06.291781 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:16:06.294144 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:16:06.294295 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:06.316900 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:16:06.318306 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:16:06.318441 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:06.319396 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:16:06.319488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:06.320282 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:16:06.320365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:06.321306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:06.361169 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:16:06.362250 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:16:06.365048 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:16:06.365318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:16:06.373098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:16:06.373281 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:06.374230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:16:06.374324 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:06.375122 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:16:06.375229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:16:06.376416 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:16:06.376521 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:16:06.377436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:16:06.377531 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:06.400517 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:16:06.404066 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:16:06.404219 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:06.406119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:16:06.406233 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:16:06.408741 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:16:06.408841 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:06.411918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:16:06.412083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:06.425791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:16:06.426001 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:16:06.428655 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:16:06.435832 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:16:06.457046 systemd[1]: Switching root.
Feb 13 20:16:06.645208 systemd-journald[182]: Journal stopped
Feb 13 20:16:08.860756 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:16:08.860886 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:16:08.860912 kernel: SELinux: policy capability open_perms=1
Feb 13 20:16:08.865122 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:16:08.865182 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:16:08.865203 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:16:08.865231 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:16:08.865259 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:16:08.865279 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:16:08.865298 kernel: audit: type=1403 audit(1739477767.062:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:16:08.865329 systemd[1]: Successfully loaded SELinux policy in 88.934ms.
Feb 13 20:16:08.865359 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.802ms.
Feb 13 20:16:08.865384 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:16:08.865406 systemd[1]: Detected virtualization kvm.
Feb 13 20:16:08.865429 systemd[1]: Detected architecture x86-64.
Feb 13 20:16:08.865454 systemd[1]: Detected first boot.
Feb 13 20:16:08.865475 systemd[1]: Hostname set to .
Feb 13 20:16:08.865496 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:16:08.865517 zram_generator::config[1031]: No configuration found.
Feb 13 20:16:08.865541 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:16:08.865563 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:16:08.865585 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
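The "systemd 255 running in system mode (...)" entry above encodes compile-time options as a banner of `+FEATURE`/`-FEATURE` tokens. A small illustrative helper (not part of systemd) for splitting such a banner into enabled and disabled sets:

```python
def parse_systemd_features(banner):
    """Split a systemd feature banner like '+PAM +AUDIT -APPARMOR' into
    enabled (+) and disabled (-) feature sets.  Tokens without a sign,
    such as 'default-hierarchy=unified', are ignored."""
    enabled, disabled = set(), set()
    for tok in banner.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return {"enabled": enabled, "disabled": disabled}

feats = parse_systemd_features(
    "+PAM +AUDIT +SELINUX -APPARMOR -FIDO2 default-hierarchy=unified"
)
```

From the banner in this log one can read off, for example, that SELinux support is compiled in while AppArmor is not, which matches the SELinux policy-load messages that precede it.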
Feb 13 20:16:08.865606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:16:08.865643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:16:08.865666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:16:08.865688 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:16:08.865708 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:16:08.865729 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:16:08.865750 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:16:08.865772 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:16:08.865793 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:16:08.865817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:08.865838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:08.865859 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:16:08.865880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:16:08.868114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:16:08.868144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:16:08.868164 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:16:08.868183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:08.868202 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:16:08.868234 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:16:08.868253 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:16:08.868272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:16:08.868315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:08.868333 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:16:08.868351 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:16:08.868369 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:16:08.868394 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:16:08.868411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:16:08.868430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:08.868449 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:08.868468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:08.868487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:16:08.868505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:16:08.868524 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:16:08.868542 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:16:08.868566 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:16:08.868597 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:16:08.868616 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:16:08.868635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:16:08.868672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:16:08.868691 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:16:08.868711 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:16:08.868732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:16:08.868754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:16:08.868772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:16:08.868789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:16:08.868807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:16:08.868829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:16:08.868851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:16:08.868871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:16:08.868893 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:16:08.868913 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:16:08.872019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:16:08.872074 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:16:08.872094 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:16:08.872112 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:16:08.872130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:16:08.872149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:16:08.872188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:16:08.872241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:16:08.872260 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:16:08.872294 systemd[1]: Stopped verity-setup.service.
Feb 13 20:16:08.872314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:16:08.872332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:16:08.872353 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:16:08.872370 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:16:08.872390 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:16:08.872413 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:16:08.872431 kernel: loop: module loaded
Feb 13 20:16:08.872451 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:16:08.872468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:08.872488 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:16:08.872505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:16:08.872593 systemd-journald[1107]: Collecting audit messages is disabled.
Feb 13 20:16:08.872649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:16:08.872669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:16:08.872689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:16:08.872709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:16:08.872734 systemd-journald[1107]: Journal started
Feb 13 20:16:08.872802 systemd-journald[1107]: Runtime Journal (/run/log/journal/f5bff770430c42a79f66b393e2314817) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:16:08.234890 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:16:08.274729 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:16:08.877049 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:16:08.275552 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:16:08.875707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:16:08.877071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:16:08.879169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:08.881294 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:16:08.884835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:16:08.912229 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:16:08.924422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:16:08.927485 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:16:08.927561 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:16:08.932009 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:16:08.944197 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:16:08.956035 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:16:08.957263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:16:08.966236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:16:08.973295 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:16:08.975047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:16:08.980291 kernel: fuse: init (API version 7.39)
Feb 13 20:16:08.982618 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:16:08.984503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:16:08.989342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:16:08.995307 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:16:09.025415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:16:09.032427 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:16:09.036196 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:16:09.036895 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:16:09.051614 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:16:09.055072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:16:09.101075 kernel: loop0: detected capacity change from 0 to 205544
Feb 13 20:16:09.112099 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:16:09.125499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:16:09.156688 systemd-journald[1107]: Time spent on flushing to /var/log/journal/f5bff770430c42a79f66b393e2314817 is 232.465ms for 986 entries.
Feb 13 20:16:09.156688 systemd-journald[1107]: System Journal (/var/log/journal/f5bff770430c42a79f66b393e2314817) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:16:09.456786 systemd-journald[1107]: Received client request to flush runtime journal.
Feb 13 20:16:09.456879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:16:09.456905 kernel: ACPI: bus type drm_connector registered
Feb 13 20:16:09.456927 kernel: loop1: detected capacity change from 0 to 142488
Feb 13 20:16:09.457263 kernel: loop2: detected capacity change from 0 to 8
Feb 13 20:16:09.157181 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:16:09.206544 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:16:09.213649 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:16:09.267191 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:16:09.267537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:16:09.322346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:09.333255 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:16:09.366210 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:16:09.405346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:09.418879 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Feb 13 20:16:09.418901 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Feb 13 20:16:09.423588 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:16:09.425052 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:16:09.440862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:16:09.449577 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:16:09.467478 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:16:09.478084 kernel: loop3: detected capacity change from 0 to 140768
Feb 13 20:16:09.549777 kernel: loop4: detected capacity change from 0 to 205544
Feb 13 20:16:09.580180 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:16:09.594324 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:16:09.602644 kernel: loop5: detected capacity change from 0 to 142488
Feb 13 20:16:09.638979 kernel: loop6: detected capacity change from 0 to 8
Feb 13 20:16:09.646979 kernel: loop7: detected capacity change from 0 to 140768
Feb 13 20:16:09.669798 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 20:16:09.670606 (sd-merge)[1175]: Merged extensions into '/usr'.
Feb 13 20:16:09.673510 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Feb 13 20:16:09.673529 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Feb 13 20:16:09.690672 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:16:09.690724 systemd[1]: Reloading...
Feb 13 20:16:09.975352 zram_generator::config[1206]: No configuration found.
Feb 13 20:16:10.369033 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:16:10.549177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:16:10.672556 systemd[1]: Reloading finished in 981 ms.
Feb 13 20:16:10.711525 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:16:10.714500 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:16:10.717404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:10.736205 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:16:10.756285 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:16:10.806426 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:16:10.806467 systemd[1]: Reloading...
Feb 13 20:16:10.919769 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:16:10.924976 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:16:10.927397 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:16:10.927823 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Feb 13 20:16:10.927927 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Feb 13 20:16:10.977018 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:16:10.977036 systemd-tmpfiles[1251]: Skipping /boot
Feb 13 20:16:10.993700 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:16:10.993712 systemd-tmpfiles[1251]: Skipping /boot
Feb 13 20:16:11.129119 zram_generator::config[1278]: No configuration found.
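The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings above occur when two tmpfiles.d entries declare the same path; systemd-tmpfiles applies the first and logs a warning for the second. A hypothetical fragment that would trigger such a warning (the actual contents of provision.conf at line 20 are not shown in this log):

```
# Hypothetical tmpfiles.d fragment, illustrative only.
# Format: Type Path Mode User Group Age
d /root 0700 root root -
d /root 0755 root root -   # duplicate path: ignored, with a warning as above
```

The warnings are harmless here: the first matching line wins, and the boot continues normally.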
Feb 13 20:16:11.383431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:11.480516 systemd[1]: Reloading finished in 673 ms. Feb 13 20:16:11.514569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:16:11.521030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:11.611257 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:16:11.620827 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:16:11.625314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:16:11.658588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:11.672376 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:11.683237 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:16:11.692749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.695383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:11.704574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:11.708552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:11.727370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:11.728321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 20:16:11.728489 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.733807 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.735323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:11.735568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:11.745618 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:16:11.747088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.755616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:11.755842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:11.758284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:11.758721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:11.760715 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:11.760887 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:11.772381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.773364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:11.781445 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 20:16:11.782578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:11.782824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:11.783103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:11.783228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:11.787051 systemd[1]: Finished ensure-sysext.service. Feb 13 20:16:11.800012 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:16:11.803683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:16:11.830587 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:11.830862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:11.832754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:16:11.851395 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Feb 13 20:16:11.852236 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:16:11.867030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:16:11.872349 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:11.911435 augenrules[1359]: No rules Feb 13 20:16:11.906680 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 20:16:11.915785 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:11.924861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:11.936311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:11.942169 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:16:12.137758 systemd-networkd[1367]: lo: Link UP Feb 13 20:16:12.140813 systemd-networkd[1367]: lo: Gained carrier Feb 13 20:16:12.145880 systemd-networkd[1367]: Enumeration completed Feb 13 20:16:12.146227 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:12.158358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:16:12.205152 systemd-resolved[1328]: Positive Trust Anchors: Feb 13 20:16:12.205175 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:12.205226 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:12.215610 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:16:12.219889 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:16:12.223141 systemd-resolved[1328]: Using system hostname 'ci-4081.3.1-a-865b7d79a6'. Feb 13 20:16:12.229195 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 13 20:16:12.231177 systemd[1]: Reached target network.target - Network. Feb 13 20:16:12.231874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:12.236454 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:16:12.347067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1382) Feb 13 20:16:12.374167 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:16:12.376405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:12.376757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:12.389006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:12.403798 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:12.410306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:12.412226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:12.412290 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:12.412340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:12.436165 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:12.436461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:12.449371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 20:16:12.451665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:12.463005 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:16:12.469569 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:12.472101 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:16:12.476563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:12.477763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:12.481709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:12.500194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:16:12.516352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:16:12.557119 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-be:70:92:52:e4:8c.network. Feb 13 20:16:12.559357 systemd-networkd[1367]: eth1: Link UP Feb 13 20:16:12.559371 systemd-networkd[1367]: eth1: Gained carrier Feb 13 20:16:12.566616 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Feb 13 20:16:12.575710 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:16:12.578901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:16:12.585992 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:16:12.595994 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:16:12.599678 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-26:bb:6b:4e:9e:9f.network. 
Feb 13 20:16:12.602080 systemd-networkd[1367]: eth0: Link UP Feb 13 20:16:12.602088 systemd-networkd[1367]: eth0: Gained carrier Feb 13 20:16:12.648883 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:16:12.722541 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:16:12.726565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:12.788007 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:16:12.788167 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:16:12.799951 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:16:12.800078 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:16:12.800145 kernel: [drm] features: -context_init Feb 13 20:16:12.802980 kernel: [drm] number of scanouts: 1 Feb 13 20:16:12.806972 kernel: [drm] number of cap sets: 0 Feb 13 20:16:12.847859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:12.848309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:12.866215 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:16:12.865872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:12.871991 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:16:12.877601 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:16:12.902758 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:16:12.913633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:12.914166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:12.937342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 20:16:13.033334 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:16:13.088600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:16:13.100329 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:16:13.179375 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:13.180112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:13.210587 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:16:13.212485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:13.214135 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:13.215851 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:16:13.216039 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:16:13.216408 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:16:13.216752 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:16:13.216884 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:16:13.217013 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:16:13.217043 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:13.217099 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:13.220558 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:16:13.224458 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:16:13.250915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Feb 13 20:16:13.254816 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:16:13.256165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:16:13.258150 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:13.258911 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:13.259726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:13.259771 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:13.277715 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:16:13.289318 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:16:13.296285 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:13.306246 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:16:13.318775 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:16:13.340181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:16:13.341016 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:16:13.348277 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:16:13.352074 jq[1440]: false Feb 13 20:16:13.362418 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:16:13.389594 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:16:13.404280 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 20:16:13.413698 coreos-metadata[1438]: Feb 13 20:16:13.413 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:16:13.422568 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:16:13.427220 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:16:13.434167 coreos-metadata[1438]: Feb 13 20:16:13.427 INFO Fetch successful Feb 13 20:16:13.429810 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:16:13.437337 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:16:13.447261 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:16:13.455056 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:16:13.462635 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:16:13.464194 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:16:13.471490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:16:13.471805 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:16:13.499405 dbus-daemon[1439]: [system] SELinux support is enabled Feb 13 20:16:13.502843 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:16:13.511116 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:16:13.514330 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 13 20:16:13.514377 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:16:13.515340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:16:13.515457 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:16:13.515486 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:16:13.547800 jq[1451]: true Feb 13 20:16:13.560038 update_engine[1449]: I20250213 20:16:13.557631 1449 main.cc:92] Flatcar Update Engine starting Feb 13 20:16:13.580263 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:16:13.593204 tar[1454]: linux-amd64/helm Feb 13 20:16:13.600277 update_engine[1449]: I20250213 20:16:13.579595 1449 update_check_scheduler.cc:74] Next update check in 7m38s Feb 13 20:16:13.597332 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 20:16:13.600474 extend-filesystems[1443]: Found loop4 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found loop5 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found loop6 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found loop7 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda1 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda2 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda3 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found usr Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda4 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda6 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda7 Feb 13 20:16:13.600474 extend-filesystems[1443]: Found vda9 Feb 13 20:16:13.600474 extend-filesystems[1443]: Checking size of /dev/vda9 Feb 13 20:16:13.724431 jq[1473]: true Feb 13 20:16:13.726203 extend-filesystems[1443]: Resized partition /dev/vda9 Feb 13 20:16:13.616191 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:16:13.622278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:16:13.743407 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:16:13.625636 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:16:13.645697 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:16:13.663838 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:16:13.782393 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:16:13.810122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Feb 13 20:16:13.964021 systemd-logind[1448]: New seat seat0. 
Feb 13 20:16:14.017880 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:16:14.017918 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:16:14.021843 systemd-networkd[1367]: eth0: Gained IPv6LL Feb 13 20:16:14.035606 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:16:14.070842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:16:14.085868 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:16:14.105334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:14.118525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:16:14.154225 systemd-networkd[1367]: eth1: Gained IPv6LL Feb 13 20:16:14.165539 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:16:14.212275 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:16:14.212275 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:16:14.212275 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:16:14.220843 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:16:14.265411 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:14.267319 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Feb 13 20:16:14.267319 extend-filesystems[1443]: Found vdb Feb 13 20:16:14.223357 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:16:14.251402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:16:14.301346 systemd[1]: Starting sshkeys.service... Feb 13 20:16:14.395735 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Feb 13 20:16:14.418053 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:16:14.422112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:16:14.530604 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:16:14.570415 containerd[1472]: time="2025-02-13T20:16:14.567420747Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:16:14.578289 coreos-metadata[1526]: Feb 13 20:16:14.577 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:16:14.598532 coreos-metadata[1526]: Feb 13 20:16:14.597 INFO Fetch successful Feb 13 20:16:14.634499 unknown[1526]: wrote ssh authorized keys file for user: core Feb 13 20:16:14.787048 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:14.792081 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:16:14.799362 containerd[1472]: time="2025-02-13T20:16:14.799067250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.809601 systemd[1]: Finished sshkeys.service. Feb 13 20:16:14.823757 containerd[1472]: time="2025-02-13T20:16:14.823679012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.823918273Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.823974485Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.824252956Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.824308646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.824417087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:14.824537 containerd[1472]: time="2025-02-13T20:16:14.824441896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.827744 containerd[1472]: time="2025-02-13T20:16:14.827675375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:14.827891 containerd[1472]: time="2025-02-13T20:16:14.827874526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.830383 containerd[1472]: time="2025-02-13T20:16:14.829316558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:14.830383 containerd[1472]: time="2025-02-13T20:16:14.829695162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.830383 containerd[1472]: time="2025-02-13T20:16:14.829962106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:16:14.830383 containerd[1472]: time="2025-02-13T20:16:14.830315164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:14.834152 containerd[1472]: time="2025-02-13T20:16:14.833134035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:14.834152 containerd[1472]: time="2025-02-13T20:16:14.833193155Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:16:14.834152 containerd[1472]: time="2025-02-13T20:16:14.833401609Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:16:14.834152 containerd[1472]: time="2025-02-13T20:16:14.833477283Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:16:14.857479 containerd[1472]: time="2025-02-13T20:16:14.857263690Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:16:14.857479 containerd[1472]: time="2025-02-13T20:16:14.857385553Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:16:14.857479 containerd[1472]: time="2025-02-13T20:16:14.857412712Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:16:14.857479 containerd[1472]: time="2025-02-13T20:16:14.857440983Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:16:14.858704 containerd[1472]: time="2025-02-13T20:16:14.858077022Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1
Feb 13 20:16:14.858704 containerd[1472]: time="2025-02-13T20:16:14.858416617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.858819709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.859061554Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.859082779Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.859096930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.859112448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.859782 containerd[1472]: time="2025-02-13T20:16:14.859126391Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861043 containerd[1472]: time="2025-02-13T20:16:14.859138894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861177 containerd[1472]: time="2025-02-13T20:16:14.861066253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861177 containerd[1472]: time="2025-02-13T20:16:14.861096866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861177 containerd[1472]: time="2025-02-13T20:16:14.861155193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861294 containerd[1472]: time="2025-02-13T20:16:14.861184113Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861294 containerd[1472]: time="2025-02-13T20:16:14.861202339Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 20:16:14.861294 containerd[1472]: time="2025-02-13T20:16:14.861254925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861294 containerd[1472]: time="2025-02-13T20:16:14.861278099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861429 containerd[1472]: time="2025-02-13T20:16:14.861308771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861429 containerd[1472]: time="2025-02-13T20:16:14.861326951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861429 containerd[1472]: time="2025-02-13T20:16:14.861388574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861429 containerd[1472]: time="2025-02-13T20:16:14.861408550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861429 containerd[1472]: time="2025-02-13T20:16:14.861420965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861434119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861462096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861480094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861508929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861522334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861557088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.861836 containerd[1472]: time="2025-02-13T20:16:14.861579178Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.861865332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.861895651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.861926745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.862027098Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.862055081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 20:16:14.862102 containerd[1472]: time="2025-02-13T20:16:14.862092470Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 20:16:14.862268 containerd[1472]: time="2025-02-13T20:16:14.862111127Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 20:16:14.862268 containerd[1472]: time="2025-02-13T20:16:14.862125131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.862268 containerd[1472]: time="2025-02-13T20:16:14.862191160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 20:16:14.862268 containerd[1472]: time="2025-02-13T20:16:14.862212912Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 20:16:14.862406 containerd[1472]: time="2025-02-13T20:16:14.862272102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 20:16:14.866546 containerd[1472]: time="2025-02-13T20:16:14.866244079Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 20:16:14.866546 containerd[1472]: time="2025-02-13T20:16:14.866385611Z" level=info msg="Connect containerd service"
Feb 13 20:16:14.866546 containerd[1472]: time="2025-02-13T20:16:14.866447487Z" level=info msg="using legacy CRI server"
Feb 13 20:16:14.866546 containerd[1472]: time="2025-02-13T20:16:14.866458575Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 20:16:14.868988 containerd[1472]: time="2025-02-13T20:16:14.866893811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 20:16:14.872988 containerd[1472]: time="2025-02-13T20:16:14.870110768Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:16:14.872988 containerd[1472]: time="2025-02-13T20:16:14.871780763Z" level=info msg="Start subscribing containerd event"
Feb 13 20:16:14.872988 containerd[1472]: time="2025-02-13T20:16:14.871877342Z" level=info msg="Start recovering state"
Feb 13 20:16:14.873242 containerd[1472]: time="2025-02-13T20:16:14.873054765Z" level=info msg="Start event monitor"
Feb 13 20:16:14.873242 containerd[1472]: time="2025-02-13T20:16:14.873110030Z" level=info msg="Start snapshots syncer"
Feb 13 20:16:14.873242 containerd[1472]: time="2025-02-13T20:16:14.873128298Z" level=info msg="Start cni network conf syncer for default"
Feb 13 20:16:14.873242 containerd[1472]: time="2025-02-13T20:16:14.873144578Z" level=info msg="Start streaming server"
Feb 13 20:16:14.878050 containerd[1472]: time="2025-02-13T20:16:14.877988331Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 20:16:14.879230 containerd[1472]: time="2025-02-13T20:16:14.879179740Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 20:16:14.883742 containerd[1472]: time="2025-02-13T20:16:14.883684767Z" level=info msg="containerd successfully booted in 0.322580s"
Feb 13 20:16:14.883850 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 20:16:15.277861 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 20:16:15.339622 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 20:16:15.355383 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 20:16:15.424418 systemd[1]: Started sshd@0-147.182.251.87:22-147.75.109.163:53330.service - OpenSSH per-connection server daemon (147.75.109.163:53330).
Feb 13 20:16:15.438436 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 20:16:15.438788 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 20:16:15.458665 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 20:16:15.546906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 20:16:15.567337 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 20:16:15.588896 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 20:16:15.594516 systemd[1]: Reached target getty.target - Login Prompts.
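The entries above all share the journald console format: `Mon DD HH:MM:SS.micro source[pid]: message`, where containerd and dockerd nest their own `time=… level=… msg=…` fields inside the message. As a minimal sketch of splitting such a line into fields (the regex and field names are my own, not part of any tool):

```python
import re

# Illustrative parser for the "Mon DD HH:MM:SS.micro source[pid]: message"
# console format seen in this log; the [pid] part is optional (e.g. "kernel:").
LINE_RE = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[^\[:]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$"
)

def parse_entry(line: str) -> dict:
    """Split one console log entry into its named fields."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized entry: {line!r}")
    return m.groupdict()

entry = parse_entry(
    "Feb 13 20:16:14.883850 systemd[1]: "
    "Started containerd.service - containerd container runtime."
)
```

Kernel lines parse the same way, with `pid` left as `None` since they carry no `[pid]` suffix.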
Feb 13 20:16:15.629716 sshd[1547]: Accepted publickey for core from 147.75.109.163 port 53330 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:15.640437 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:15.671564 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:16:15.682317 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:16:15.691209 systemd-logind[1448]: New session 1 of user core.
Feb 13 20:16:15.737912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:16:15.763641 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:16:15.805221 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:16:15.891420 tar[1454]: linux-amd64/LICENSE
Feb 13 20:16:15.891420 tar[1454]: linux-amd64/README.md
Feb 13 20:16:15.942769 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 20:16:16.026283 systemd[1559]: Queued start job for default target default.target.
Feb 13 20:16:16.035440 systemd[1559]: Created slice app.slice - User Application Slice.
Feb 13 20:16:16.035497 systemd[1559]: Reached target paths.target - Paths.
Feb 13 20:16:16.035518 systemd[1559]: Reached target timers.target - Timers.
Feb 13 20:16:16.041691 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:16:16.083627 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:16:16.085049 systemd[1559]: Reached target sockets.target - Sockets.
Feb 13 20:16:16.085128 systemd[1559]: Reached target basic.target - Basic System.
Feb 13 20:16:16.086036 systemd[1559]: Reached target default.target - Main User Target.
Feb 13 20:16:16.086095 systemd[1559]: Startup finished in 255ms.
Feb 13 20:16:16.089208 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:16:16.106270 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:16:16.197713 systemd[1]: Started sshd@1-147.182.251.87:22-147.75.109.163:53334.service - OpenSSH per-connection server daemon (147.75.109.163:53334).
Feb 13 20:16:16.328490 sshd[1573]: Accepted publickey for core from 147.75.109.163 port 53334 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:16.334585 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:16.351369 systemd-logind[1448]: New session 2 of user core.
Feb 13 20:16:16.358275 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:16:16.463355 sshd[1573]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:16.479503 systemd[1]: sshd@1-147.182.251.87:22-147.75.109.163:53334.service: Deactivated successfully.
Feb 13 20:16:16.483338 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 20:16:16.490809 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit.
Feb 13 20:16:16.496863 systemd[1]: Started sshd@2-147.182.251.87:22-147.75.109.163:53340.service - OpenSSH per-connection server daemon (147.75.109.163:53340).
Feb 13 20:16:16.505278 systemd-logind[1448]: Removed session 2.
Feb 13 20:16:16.604703 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 53340 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:16.608728 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:16.638748 systemd-logind[1448]: New session 3 of user core.
Feb 13 20:16:16.647855 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:16:16.740585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:16.742796 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 20:16:16.746532 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:16.764315 systemd[1]: Startup finished in 1.548s (kernel) + 10.243s (initrd) + 9.782s (userspace) = 21.574s. Feb 13 20:16:16.834253 sshd[1580]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:16.844662 systemd[1]: sshd@2-147.182.251.87:22-147.75.109.163:53340.service: Deactivated successfully. Feb 13 20:16:16.847654 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:16:16.854101 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:16:16.856392 systemd-logind[1448]: Removed session 3. Feb 13 20:16:17.838650 kubelet[1587]: E0213 20:16:17.838530 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:17.841912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:17.842137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:17.842801 systemd[1]: kubelet.service: Consumed 1.545s CPU time. Feb 13 20:16:18.974542 systemd-timesyncd[1348]: Contacted time server 75.72.171.171:123 (1.flatcar.pool.ntp.org). Feb 13 20:16:18.974647 systemd-timesyncd[1348]: Initial clock synchronization to Thu 2025-02-13 20:16:18.901479 UTC. Feb 13 20:16:26.813912 systemd[1]: Started sshd@3-147.182.251.87:22-147.75.109.163:38114.service - OpenSSH per-connection server daemon (147.75.109.163:38114). 
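The kubelet exit above is the usual pre-join state: the unit starts before `kubeadm init`/`join` has written `/var/lib/kubelet/config.yaml`, so the process exits with status 1 and systemd records the failure. A hedged sketch of the same precondition check (the path is the one in the log; the helper name is mine, not part of kubelet):

```python
from pathlib import Path

# Path the kubelet complains about in the log above. Checking it first
# distinguishes "node not yet joined" from a real kubelet fault.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
    """True once kubeadm init/join has written the kubelet config file."""
    return path.is_file()

if not kubelet_config_present():
    # Mirrors the log's "no such file or directory" failure mode.
    print(f"{KUBELET_CONFIG} missing: kubelet will exit 1 until the node joins")
```

On a host in this state, the loop of restart attempts seen later in the log stops on its own once the file appears.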
Feb 13 20:16:26.898967 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 38114 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:26.902452 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:26.913393 systemd-logind[1448]: New session 4 of user core.
Feb 13 20:16:26.924337 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:16:27.023185 sshd[1603]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:27.036446 systemd[1]: sshd@3-147.182.251.87:22-147.75.109.163:38114.service: Deactivated successfully.
Feb 13 20:16:27.041455 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:16:27.047789 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:16:27.097438 systemd[1]: Started sshd@4-147.182.251.87:22-147.75.109.163:38118.service - OpenSSH per-connection server daemon (147.75.109.163:38118).
Feb 13 20:16:27.103409 systemd-logind[1448]: Removed session 4.
Feb 13 20:16:27.206505 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 38118 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:27.208909 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:27.229666 systemd-logind[1448]: New session 5 of user core.
Feb 13 20:16:27.243399 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:16:27.317306 sshd[1610]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:27.330720 systemd[1]: sshd@4-147.182.251.87:22-147.75.109.163:38118.service: Deactivated successfully.
Feb 13 20:16:27.334129 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:16:27.338431 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:16:27.349626 systemd[1]: Started sshd@5-147.182.251.87:22-147.75.109.163:38130.service - OpenSSH per-connection server daemon (147.75.109.163:38130).
Feb 13 20:16:27.356040 systemd-logind[1448]: Removed session 5.
Feb 13 20:16:27.412771 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 38130 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:27.415310 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:27.434398 systemd-logind[1448]: New session 6 of user core.
Feb 13 20:16:27.436508 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:16:27.527451 sshd[1617]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:27.538903 systemd[1]: sshd@5-147.182.251.87:22-147.75.109.163:38130.service: Deactivated successfully.
Feb 13 20:16:27.553385 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:16:27.557268 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:16:27.566423 systemd[1]: Started sshd@6-147.182.251.87:22-147.75.109.163:38136.service - OpenSSH per-connection server daemon (147.75.109.163:38136).
Feb 13 20:16:27.572023 systemd-logind[1448]: Removed session 6.
Feb 13 20:16:27.641132 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 38136 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:16:27.644697 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:27.656858 systemd-logind[1448]: New session 7 of user core.
Feb 13 20:16:27.678455 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:16:27.780346 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:16:27.780829 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:16:28.006529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:16:28.035462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
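The "Scheduled restart job" entries for kubelet.service land at 20:16:28.006529, 20:16:38.997672, and 20:16:49.498112, i.e. roughly 10.5 s apart, consistent with a RestartSec on the order of ten seconds plus the time each attempt takes to fail. A rough check of that spacing (timestamps copied from the log; same day assumed, helper name is mine):

```python
from datetime import datetime, timedelta

# Times of the three "kubelet.service: Scheduled restart job" entries above.
restarts = ["20:16:28.006529", "20:16:38.997672", "20:16:49.498112"]

def intervals(stamps: list[str]) -> list[timedelta]:
    """Deltas between consecutive HH:MM:SS.micro timestamps."""
    parsed = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    return [b - a for a, b in zip(parsed, parsed[1:])]

gaps = intervals(restarts)
# Both gaps come out a little over ten seconds.
```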
Feb 13 20:16:28.442340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:28.472166 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:16:28.797645 kubelet[1644]: E0213 20:16:28.797507 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:16:28.804843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:16:28.805357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:16:29.065247 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:16:29.072986 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:16:30.014346 dockerd[1658]: time="2025-02-13T20:16:30.014263650Z" level=info msg="Starting up"
Feb 13 20:16:30.290202 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1601398993-merged.mount: Deactivated successfully.
Feb 13 20:16:30.345059 dockerd[1658]: time="2025-02-13T20:16:30.344328780Z" level=info msg="Loading containers: start."
Feb 13 20:16:30.649389 kernel: Initializing XFRM netlink socket
Feb 13 20:16:30.888700 systemd-networkd[1367]: docker0: Link UP
Feb 13 20:16:30.931062 dockerd[1658]: time="2025-02-13T20:16:30.930918782Z" level=info msg="Loading containers: done."
Feb 13 20:16:30.985709 dockerd[1658]: time="2025-02-13T20:16:30.984096445Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:16:30.985709 dockerd[1658]: time="2025-02-13T20:16:30.984290349Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:16:30.985709 dockerd[1658]: time="2025-02-13T20:16:30.984488894Z" level=info msg="Daemon has completed initialization"
Feb 13 20:16:31.096822 dockerd[1658]: time="2025-02-13T20:16:31.096209451Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:16:31.098304 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:16:32.441141 containerd[1472]: time="2025-02-13T20:16:32.440820119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 20:16:32.451534 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Feb 13 20:16:33.282323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181382582.mount: Deactivated successfully.
Feb 13 20:16:35.521182 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Feb 13 20:16:36.262680 containerd[1472]: time="2025-02-13T20:16:36.262563232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:36.265320 containerd[1472]: time="2025-02-13T20:16:36.265092697Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588"
Feb 13 20:16:36.267921 containerd[1472]: time="2025-02-13T20:16:36.267685219Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:36.279990 containerd[1472]: time="2025-02-13T20:16:36.278533580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:36.280549 containerd[1472]: time="2025-02-13T20:16:36.279927890Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 3.839045036s"
Feb 13 20:16:36.280698 containerd[1472]: time="2025-02-13T20:16:36.280671322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 20:16:36.282962 containerd[1472]: time="2025-02-13T20:16:36.282884479Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 20:16:38.997672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
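The paired `stop pulling … bytes read=27976588` and `Pulled image … in 3.839045036s` entries for kube-apiserver allow a back-of-the-envelope throughput estimate (values copied from the log; the helper function is mine):

```python
# Values taken from the kube-apiserver pull entries above.
BYTES_READ = 27_976_588      # "bytes read=27976588"
PULL_SECONDS = 3.839045036   # "in 3.839045036s"

def throughput_bps(nbytes: int, seconds: float) -> float:
    """Average pull rate in bytes per second."""
    return nbytes / seconds

rate = throughput_bps(BYTES_READ, PULL_SECONDS)
# Works out to roughly 7.3 MB/s (decimal megabytes) for this pull.
```

The same arithmetic applies to the other image pulls later in the log, each of which reports its own byte count and duration.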
Feb 13 20:16:39.014896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:39.305297 containerd[1472]: time="2025-02-13T20:16:39.305110803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:39.312304 containerd[1472]: time="2025-02-13T20:16:39.308612276Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193"
Feb 13 20:16:39.312304 containerd[1472]: time="2025-02-13T20:16:39.309401548Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:39.316186 containerd[1472]: time="2025-02-13T20:16:39.316127599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:39.318739 containerd[1472]: time="2025-02-13T20:16:39.318639001Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 3.035483047s"
Feb 13 20:16:39.324395 containerd[1472]: time="2025-02-13T20:16:39.320855263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 20:16:39.328021 containerd[1472]: time="2025-02-13T20:16:39.327961526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 20:16:39.333132 systemd-resolved[1328]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Feb 13 20:16:39.342327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:39.353659 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:16:39.483727 kubelet[1871]: E0213 20:16:39.483556 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:16:39.487558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:16:39.487772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:16:41.065579 containerd[1472]: time="2025-02-13T20:16:41.065442873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:41.068041 containerd[1472]: time="2025-02-13T20:16:41.067261150Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425"
Feb 13 20:16:41.069416 containerd[1472]: time="2025-02-13T20:16:41.069354773Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:41.077912 containerd[1472]: time="2025-02-13T20:16:41.077215680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:41.080700 containerd[1472]: time="2025-02-13T20:16:41.080620369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.752604703s"
Feb 13 20:16:41.080700 containerd[1472]: time="2025-02-13T20:16:41.080696160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 20:16:41.081629 containerd[1472]: time="2025-02-13T20:16:41.081595349Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 20:16:42.820074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307544569.mount: Deactivated successfully.
Feb 13 20:16:44.225359 containerd[1472]: time="2025-02-13T20:16:44.223009817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:44.229129 containerd[1472]: time="2025-02-13T20:16:44.229031334Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 20:16:44.230995 containerd[1472]: time="2025-02-13T20:16:44.230890248Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:44.235017 containerd[1472]: time="2025-02-13T20:16:44.233973037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:44.235972 containerd[1472]: time="2025-02-13T20:16:44.235668255Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 3.154026555s"
Feb 13 20:16:44.235972 containerd[1472]: time="2025-02-13T20:16:44.235749569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 20:16:44.237018 containerd[1472]: time="2025-02-13T20:16:44.236587893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:16:44.987345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161832842.mount: Deactivated successfully.
Feb 13 20:16:47.503348 containerd[1472]: time="2025-02-13T20:16:47.503270136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.507969 containerd[1472]: time="2025-02-13T20:16:47.506771000Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 20:16:47.511611 containerd[1472]: time="2025-02-13T20:16:47.511487220Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.516668 containerd[1472]: time="2025-02-13T20:16:47.516546005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.520447 containerd[1472]: time="2025-02-13T20:16:47.520215212Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.28357236s"
Feb 13 20:16:47.520447 containerd[1472]: time="2025-02-13T20:16:47.520307876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 20:16:47.521968 containerd[1472]: time="2025-02-13T20:16:47.521154709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:16:48.144544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727207940.mount: Deactivated successfully.
Feb 13 20:16:48.179882 containerd[1472]: time="2025-02-13T20:16:48.178487888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.183845 containerd[1472]: time="2025-02-13T20:16:48.182326525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Feb 13 20:16:48.184843 containerd[1472]: time="2025-02-13T20:16:48.184763134Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.190491 containerd[1472]: time="2025-02-13T20:16:48.190401956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.192059 containerd[1472]: time="2025-02-13T20:16:48.191982118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.889579ms"
Feb 13 20:16:48.192844 containerd[1472]: time="2025-02-13T20:16:48.192656394Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 20:16:48.195646 containerd[1472]: time="2025-02-13T20:16:48.194156348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 20:16:48.980952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021571505.mount: Deactivated successfully.
Feb 13 20:16:49.498112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 20:16:49.515758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:49.951278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:49.973057 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:16:50.464541 kubelet[1959]: E0213 20:16:50.464461 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:16:50.472980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:16:50.473278 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:16:54.344579 containerd[1472]: time="2025-02-13T20:16:54.344450797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Feb 13 20:16:54.347264 containerd[1472]: time="2025-02-13T20:16:54.347192280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:54.363453 containerd[1472]: time="2025-02-13T20:16:54.353837008Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:54.363453 containerd[1472]: time="2025-02-13T20:16:54.356868183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:54.364803 containerd[1472]: time="2025-02-13T20:16:54.364725911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.170507394s"
Feb 13 20:16:54.364803 containerd[1472]: time="2025-02-13T20:16:54.364799454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 20:16:59.211201 update_engine[1449]: I20250213 20:16:59.201051 1449 update_attempter.cc:509] Updating boot flags...
Feb 13 20:16:59.397179 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2038)
Feb 13 20:16:59.433949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:59.450546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:59.587077 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2040)
Feb 13 20:16:59.610685 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit session-7.scope)...
Feb 13 20:16:59.610708 systemd[1]: Reloading...
Feb 13 20:16:59.966977 zram_generator::config[2093]: No configuration found.
Feb 13 20:17:00.288154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:17:00.484662 systemd[1]: Reloading finished in 873 ms.
Feb 13 20:17:00.595411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:17:00.615698 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:17:00.652378 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:17:00.661234 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:17:00.666054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:17:00.688651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:17:00.977277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:17:00.995835 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:17:01.129543 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:17:01.129543 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:17:01.129543 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:17:01.129543 kubelet[2149]: I0213 20:17:01.112053 2149 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:17:01.645525 kubelet[2149]: I0213 20:17:01.645449 2149 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 20:17:01.645805 kubelet[2149]: I0213 20:17:01.645785 2149 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:17:01.646517 kubelet[2149]: I0213 20:17:01.646443 2149 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 20:17:01.732694 kubelet[2149]: E0213 20:17:01.732629 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.251.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:01.734227 kubelet[2149]: I0213 20:17:01.734169 2149 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:17:01.784962 kubelet[2149]: E0213 20:17:01.784421 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:17:01.784962 kubelet[2149]: I0213 20:17:01.784492 2149 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:17:01.795107 kubelet[2149]: I0213 20:17:01.794511 2149 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:17:01.797723 kubelet[2149]: I0213 20:17:01.796823 2149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 20:17:01.798243 kubelet[2149]: I0213 20:17:01.798157 2149 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:17:01.798530 kubelet[2149]: I0213 20:17:01.798229 2149 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-865b7d79a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:17:01.798675 kubelet[2149]: I0213 20:17:01.798533 2149 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:17:01.798675 kubelet[2149]: I0213 20:17:01.798550 2149 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 20:17:01.799006 kubelet[2149]: I0213 20:17:01.798785 2149 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:17:01.803428 kubelet[2149]: I0213 20:17:01.802682 2149 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 20:17:01.803428 kubelet[2149]: I0213 20:17:01.802767 2149 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:17:01.803428 kubelet[2149]: I0213 20:17:01.802905 2149 kubelet.go:314] "Adding apiserver pod source"
Feb 13 20:17:01.803428 kubelet[2149]: I0213 20:17:01.802966 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:17:01.818311 kubelet[2149]: I0213 20:17:01.817746 2149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:17:01.826390 kubelet[2149]: I0213 20:17:01.825232 2149 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:17:01.833822 kubelet[2149]: W0213 20:17:01.832849 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.251.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-865b7d79a6&limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:01.833822 kubelet[2149]: E0213 20:17:01.833279 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.251.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-865b7d79a6&limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:01.833822 kubelet[2149]: W0213 20:17:01.833815 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:17:01.836183 kubelet[2149]: I0213 20:17:01.834917 2149 server.go:1269] "Started kubelet"
Feb 13 20:17:01.844775 kubelet[2149]: W0213 20:17:01.843158 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:01.844775 kubelet[2149]: E0213 20:17:01.843304 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:01.847702 kubelet[2149]: I0213 20:17:01.847643 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:17:01.873406 kubelet[2149]: I0213 20:17:01.870167 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:17:01.873406 kubelet[2149]: I0213 20:17:01.870815 2149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:17:01.890564 kubelet[2149]: I0213 20:17:01.890291 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:17:01.891548 kubelet[2149]: I0213 20:17:01.891488 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:17:01.912624 kubelet[2149]: I0213 20:17:01.893023 2149 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 20:17:01.912624 kubelet[2149]: E0213 20:17:01.899550 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-865b7d79a6\" not found"
Feb 13 20:17:01.912624 kubelet[2149]: I0213 20:17:01.900521 2149 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 20:17:01.912624 kubelet[2149]: I0213 20:17:01.893447 2149 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 20:17:01.912624 kubelet[2149]: I0213 20:17:01.902257 2149 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:17:01.912624 kubelet[2149]: E0213 20:17:01.903020 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-865b7d79a6?timeout=10s\": dial tcp 147.182.251.87:6443: connect: connection refused" interval="200ms"
Feb 13 20:17:01.912624 kubelet[2149]: W0213 20:17:01.903507 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.251.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:01.912624 kubelet[2149]: E0213 20:17:01.903584 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.251.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:01.946191 kubelet[2149]: I0213 20:17:01.946133 2149 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:17:01.946587 kubelet[2149]: I0213 20:17:01.946553 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:17:01.953121 kubelet[2149]: E0213 20:17:01.937663 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.251.87:6443/api/v1/namespaces/default/events\": dial tcp 147.182.251.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-865b7d79a6.1823dddce1234792 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-865b7d79a6,UID:ci-4081.3.1-a-865b7d79a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-865b7d79a6,},FirstTimestamp:2025-02-13 20:17:01.834872722 +0000 UTC m=+0.827565631,LastTimestamp:2025-02-13 20:17:01.834872722 +0000 UTC m=+0.827565631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-865b7d79a6,}"
Feb 13 20:17:01.955127 kubelet[2149]: I0213 20:17:01.954679 2149 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:17:01.958490 kubelet[2149]: E0213 20:17:01.958424 2149 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:17:01.974248 kubelet[2149]: I0213 20:17:01.973986 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:17:01.978170 kubelet[2149]: I0213 20:17:01.977091 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:17:01.978170 kubelet[2149]: I0213 20:17:01.977413 2149 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 20:17:01.978170 kubelet[2149]: I0213 20:17:01.977461 2149 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 20:17:01.978170 kubelet[2149]: E0213 20:17:01.977543 2149 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:17:01.981993 kubelet[2149]: W0213 20:17:01.981904 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.251.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:01.982352 kubelet[2149]: E0213 20:17:01.982255 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.251.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:01.989031 kubelet[2149]: I0213 20:17:01.988990 2149 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:17:01.989481 kubelet[2149]: I0213 20:17:01.989457 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:17:01.989996 kubelet[2149]: I0213 20:17:01.989915 2149 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:17:02.014678 kubelet[2149]: E0213 20:17:02.014607 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-865b7d79a6\" not found"
Feb 13 20:17:02.015604 kubelet[2149]: I0213 20:17:02.015555 2149 policy_none.go:49] "None policy: Start"
Feb 13 20:17:02.017896 kubelet[2149]: I0213 20:17:02.017859 2149 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:17:02.018164 kubelet[2149]: I0213 20:17:02.018148 2149 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:17:02.084469 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 20:17:02.088713 kubelet[2149]: E0213 20:17:02.087396 2149 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 20:17:02.107435 kubelet[2149]: E0213 20:17:02.107366 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-865b7d79a6?timeout=10s\": dial tcp 147.182.251.87:6443: connect: connection refused" interval="400ms"
Feb 13 20:17:02.112564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 20:17:02.115660 kubelet[2149]: E0213 20:17:02.115122 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-865b7d79a6\" not found"
Feb 13 20:17:02.124838 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 20:17:02.151837 kubelet[2149]: I0213 20:17:02.151790 2149 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:17:02.154738 kubelet[2149]: I0213 20:17:02.153360 2149 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:17:02.154738 kubelet[2149]: I0213 20:17:02.153481 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:17:02.154738 kubelet[2149]: I0213 20:17:02.154043 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:17:02.158152 kubelet[2149]: E0213 20:17:02.158112 2149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-865b7d79a6\" not found"
Feb 13 20:17:02.257204 kubelet[2149]: I0213 20:17:02.256662 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.257598 kubelet[2149]: E0213 20:17:02.257563 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.251.87:6443/api/v1/nodes\": dial tcp 147.182.251.87:6443: connect: connection refused" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.328326 systemd[1]: Created slice kubepods-burstable-pod9abc947e6cc95ce147f090b8607432a3.slice - libcontainer container kubepods-burstable-pod9abc947e6cc95ce147f090b8607432a3.slice.
Feb 13 20:17:02.330399 kubelet[2149]: I0213 20:17:02.329103 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330399 kubelet[2149]: I0213 20:17:02.329169 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330399 kubelet[2149]: I0213 20:17:02.329202 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330399 kubelet[2149]: I0213 20:17:02.329232 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330399 kubelet[2149]: I0213 20:17:02.329289 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330744 kubelet[2149]: I0213 20:17:02.329314 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330744 kubelet[2149]: I0213 20:17:02.329343 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330744 kubelet[2149]: I0213 20:17:02.329392 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.330744 kubelet[2149]: I0213 20:17:02.329422 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd07e811f1c530b87babff0221956c41-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-865b7d79a6\" (UID: \"cd07e811f1c530b87babff0221956c41\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.346658 systemd[1]: Created slice kubepods-burstable-pod9c823964fede80c21163bba5ad847769.slice - libcontainer container kubepods-burstable-pod9c823964fede80c21163bba5ad847769.slice.
Feb 13 20:17:02.376994 systemd[1]: Created slice kubepods-burstable-podcd07e811f1c530b87babff0221956c41.slice - libcontainer container kubepods-burstable-podcd07e811f1c530b87babff0221956c41.slice.
Feb 13 20:17:02.460262 kubelet[2149]: I0213 20:17:02.459491 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.460262 kubelet[2149]: E0213 20:17:02.460113 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.251.87:6443/api/v1/nodes\": dial tcp 147.182.251.87:6443: connect: connection refused" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.509089 kubelet[2149]: E0213 20:17:02.508761 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-865b7d79a6?timeout=10s\": dial tcp 147.182.251.87:6443: connect: connection refused" interval="800ms"
Feb 13 20:17:02.646598 kubelet[2149]: E0213 20:17:02.643466 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:02.649863 containerd[1472]: time="2025-02-13T20:17:02.649758381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-865b7d79a6,Uid:9abc947e6cc95ce147f090b8607432a3,Namespace:kube-system,Attempt:0,}"
Feb 13 20:17:02.665380 kubelet[2149]: E0213 20:17:02.665319 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:02.674595 containerd[1472]: time="2025-02-13T20:17:02.673839816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-865b7d79a6,Uid:9c823964fede80c21163bba5ad847769,Namespace:kube-system,Attempt:0,}"
Feb 13 20:17:02.685127 kubelet[2149]: E0213 20:17:02.685064 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:02.688994 containerd[1472]: time="2025-02-13T20:17:02.687862887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-865b7d79a6,Uid:cd07e811f1c530b87babff0221956c41,Namespace:kube-system,Attempt:0,}"
Feb 13 20:17:02.754527 kubelet[2149]: W0213 20:17:02.754371 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:02.754527 kubelet[2149]: E0213 20:17:02.754470 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:02.865800 kubelet[2149]: I0213 20:17:02.865601 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.871193 kubelet[2149]: E0213 20:17:02.871113 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.251.87:6443/api/v1/nodes\": dial tcp 147.182.251.87:6443: connect: connection refused" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:02.911826 kubelet[2149]: W0213 20:17:02.911591 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.251.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:02.911826 kubelet[2149]: E0213 20:17:02.911739 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.251.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:02.961736 kubelet[2149]: W0213 20:17:02.961567 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.251.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-865b7d79a6&limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:02.961736 kubelet[2149]: E0213 20:17:02.961680 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.251.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-865b7d79a6&limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:03.135678 kubelet[2149]: E0213 20:17:03.129902 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.251.87:6443/api/v1/namespaces/default/events\": dial tcp 147.182.251.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-865b7d79a6.1823dddce1234792 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-865b7d79a6,UID:ci-4081.3.1-a-865b7d79a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-865b7d79a6,},FirstTimestamp:2025-02-13 20:17:01.834872722 +0000 UTC m=+0.827565631,LastTimestamp:2025-02-13 20:17:01.834872722 +0000 UTC m=+0.827565631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-865b7d79a6,}"
Feb 13 20:17:03.136968 kubelet[2149]: W0213 20:17:03.136627 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.251.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused
Feb 13 20:17:03.136968 kubelet[2149]: E0213 20:17:03.136727 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.251.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:17:03.310484 kubelet[2149]: E0213 20:17:03.310381 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-865b7d79a6?timeout=10s\": dial tcp 147.182.251.87:6443: connect: connection refused" interval="1.6s"
Feb 13 20:17:03.334232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268308978.mount: Deactivated successfully.
Feb 13 20:17:03.356809 containerd[1472]: time="2025-02-13T20:17:03.356426706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:03.371180 containerd[1472]: time="2025-02-13T20:17:03.370991353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:17:03.371854 containerd[1472]: time="2025-02-13T20:17:03.371729846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:03.374618 containerd[1472]: time="2025-02-13T20:17:03.374399262Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:03.387113 containerd[1472]: time="2025-02-13T20:17:03.379872151Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:03.387113 containerd[1472]: time="2025-02-13T20:17:03.386256539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:03.388551 containerd[1472]: time="2025-02-13T20:17:03.387075275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:03.394624 containerd[1472]: time="2025-02-13T20:17:03.394520854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:03.398632 
containerd[1472]: time="2025-02-13T20:17:03.398219663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 748.279686ms" Feb 13 20:17:03.401253 containerd[1472]: time="2025-02-13T20:17:03.401150202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 713.148763ms" Feb 13 20:17:03.459290 containerd[1472]: time="2025-02-13T20:17:03.458733229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 784.731805ms" Feb 13 20:17:03.686862 kubelet[2149]: I0213 20:17:03.684958 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6" Feb 13 20:17:03.686862 kubelet[2149]: E0213 20:17:03.685634 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.251.87:6443/api/v1/nodes\": dial tcp 147.182.251.87:6443: connect: connection refused" node="ci-4081.3.1-a-865b7d79a6" Feb 13 20:17:03.791784 containerd[1472]: time="2025-02-13T20:17:03.791384127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:03.791784 containerd[1472]: time="2025-02-13T20:17:03.791498803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:03.791784 containerd[1472]: time="2025-02-13T20:17:03.791641781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.795928 containerd[1472]: time="2025-02-13T20:17:03.795069174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:03.796898 containerd[1472]: time="2025-02-13T20:17:03.796539947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:03.796898 containerd[1472]: time="2025-02-13T20:17:03.796592053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.796898 containerd[1472]: time="2025-02-13T20:17:03.796759443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.797759 containerd[1472]: time="2025-02-13T20:17:03.797532805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.821655 containerd[1472]: time="2025-02-13T20:17:03.821481166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:03.821655 containerd[1472]: time="2025-02-13T20:17:03.821599036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:03.822214 containerd[1472]: time="2025-02-13T20:17:03.821626021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.828356 containerd[1472]: time="2025-02-13T20:17:03.827152390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:03.865611 systemd[1]: Started cri-containerd-b114dfdf53d0f1101d51f0a699e618eedb841532c0736428ed1ab5de233e508a.scope - libcontainer container b114dfdf53d0f1101d51f0a699e618eedb841532c0736428ed1ab5de233e508a. Feb 13 20:17:03.873357 systemd[1]: Started cri-containerd-70aa94981ffe5e8edaddc84b85898fd54019ee10092a70ae46200ae5bbc95835.scope - libcontainer container 70aa94981ffe5e8edaddc84b85898fd54019ee10092a70ae46200ae5bbc95835. Feb 13 20:17:03.878195 kubelet[2149]: E0213 20:17:03.870565 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.251.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:03.900401 systemd[1]: Started cri-containerd-1789d7dca20be5994f31969cef863b5bf173ba084f500d1627f6acc18e85eb63.scope - libcontainer container 1789d7dca20be5994f31969cef863b5bf173ba084f500d1627f6acc18e85eb63. 
Feb 13 20:17:04.038497 containerd[1472]: time="2025-02-13T20:17:04.038428839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-865b7d79a6,Uid:9abc947e6cc95ce147f090b8607432a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b114dfdf53d0f1101d51f0a699e618eedb841532c0736428ed1ab5de233e508a\"" Feb 13 20:17:04.055988 kubelet[2149]: E0213 20:17:04.053759 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:04.083419 containerd[1472]: time="2025-02-13T20:17:04.083354532Z" level=info msg="CreateContainer within sandbox \"b114dfdf53d0f1101d51f0a699e618eedb841532c0736428ed1ab5de233e508a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:17:04.100413 containerd[1472]: time="2025-02-13T20:17:04.100353821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-865b7d79a6,Uid:9c823964fede80c21163bba5ad847769,Namespace:kube-system,Attempt:0,} returns sandbox id \"70aa94981ffe5e8edaddc84b85898fd54019ee10092a70ae46200ae5bbc95835\"" Feb 13 20:17:04.102476 kubelet[2149]: E0213 20:17:04.102392 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:04.112387 containerd[1472]: time="2025-02-13T20:17:04.110148347Z" level=info msg="CreateContainer within sandbox \"70aa94981ffe5e8edaddc84b85898fd54019ee10092a70ae46200ae5bbc95835\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:17:04.130802 containerd[1472]: time="2025-02-13T20:17:04.130335443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-865b7d79a6,Uid:cd07e811f1c530b87babff0221956c41,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1789d7dca20be5994f31969cef863b5bf173ba084f500d1627f6acc18e85eb63\"" Feb 13 20:17:04.137982 kubelet[2149]: E0213 20:17:04.137772 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:04.150713 containerd[1472]: time="2025-02-13T20:17:04.150387278Z" level=info msg="CreateContainer within sandbox \"1789d7dca20be5994f31969cef863b5bf173ba084f500d1627f6acc18e85eb63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:17:04.162003 containerd[1472]: time="2025-02-13T20:17:04.161755096Z" level=info msg="CreateContainer within sandbox \"b114dfdf53d0f1101d51f0a699e618eedb841532c0736428ed1ab5de233e508a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3085efb3fd01c9ca6a2a904871abf2ddaa27ba40197a7b0127c1009841b89d82\"" Feb 13 20:17:04.169317 containerd[1472]: time="2025-02-13T20:17:04.168786686Z" level=info msg="StartContainer for \"3085efb3fd01c9ca6a2a904871abf2ddaa27ba40197a7b0127c1009841b89d82\"" Feb 13 20:17:04.209308 containerd[1472]: time="2025-02-13T20:17:04.209215551Z" level=info msg="CreateContainer within sandbox \"70aa94981ffe5e8edaddc84b85898fd54019ee10092a70ae46200ae5bbc95835\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b183584c1158e349d45318c1de22b8de00f5d148fff40dcf4bdbb275d9653b23\"" Feb 13 20:17:04.211727 containerd[1472]: time="2025-02-13T20:17:04.211659431Z" level=info msg="StartContainer for \"b183584c1158e349d45318c1de22b8de00f5d148fff40dcf4bdbb275d9653b23\"" Feb 13 20:17:04.223717 containerd[1472]: time="2025-02-13T20:17:04.223591620Z" level=info msg="CreateContainer within sandbox \"1789d7dca20be5994f31969cef863b5bf173ba084f500d1627f6acc18e85eb63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b599605b9e3c3692d09c5c4be558c04ddda018929318bc50b3fbebe2824f5836\"" Feb 13 
20:17:04.225850 containerd[1472]: time="2025-02-13T20:17:04.225655628Z" level=info msg="StartContainer for \"b599605b9e3c3692d09c5c4be558c04ddda018929318bc50b3fbebe2824f5836\"" Feb 13 20:17:04.260579 systemd[1]: Started cri-containerd-3085efb3fd01c9ca6a2a904871abf2ddaa27ba40197a7b0127c1009841b89d82.scope - libcontainer container 3085efb3fd01c9ca6a2a904871abf2ddaa27ba40197a7b0127c1009841b89d82. Feb 13 20:17:04.295522 systemd[1]: Started cri-containerd-b183584c1158e349d45318c1de22b8de00f5d148fff40dcf4bdbb275d9653b23.scope - libcontainer container b183584c1158e349d45318c1de22b8de00f5d148fff40dcf4bdbb275d9653b23. Feb 13 20:17:04.366037 kubelet[2149]: W0213 20:17:04.365345 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.251.87:6443: connect: connection refused Feb 13 20:17:04.366037 kubelet[2149]: E0213 20:17:04.365400 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.251.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.251.87:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:04.369221 systemd[1]: Started cri-containerd-b599605b9e3c3692d09c5c4be558c04ddda018929318bc50b3fbebe2824f5836.scope - libcontainer container b599605b9e3c3692d09c5c4be558c04ddda018929318bc50b3fbebe2824f5836. 
Feb 13 20:17:04.473027 containerd[1472]: time="2025-02-13T20:17:04.471337119Z" level=info msg="StartContainer for \"b183584c1158e349d45318c1de22b8de00f5d148fff40dcf4bdbb275d9653b23\" returns successfully" Feb 13 20:17:04.486892 containerd[1472]: time="2025-02-13T20:17:04.484953662Z" level=info msg="StartContainer for \"3085efb3fd01c9ca6a2a904871abf2ddaa27ba40197a7b0127c1009841b89d82\" returns successfully" Feb 13 20:17:04.567100 containerd[1472]: time="2025-02-13T20:17:04.566797048Z" level=info msg="StartContainer for \"b599605b9e3c3692d09c5c4be558c04ddda018929318bc50b3fbebe2824f5836\" returns successfully" Feb 13 20:17:05.013192 kubelet[2149]: E0213 20:17:05.013145 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:05.024692 kubelet[2149]: E0213 20:17:05.024166 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:05.030587 kubelet[2149]: E0213 20:17:05.030468 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:05.289157 kubelet[2149]: I0213 20:17:05.288643 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6" Feb 13 20:17:06.032776 kubelet[2149]: E0213 20:17:06.032723 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:07.033285 kubelet[2149]: E0213 20:17:07.033232 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Feb 13 20:17:07.484475 kubelet[2149]: E0213 20:17:07.484250 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-865b7d79a6\" not found" node="ci-4081.3.1-a-865b7d79a6" Feb 13 20:17:07.563894 kubelet[2149]: I0213 20:17:07.563528 2149 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-865b7d79a6" Feb 13 20:17:07.563894 kubelet[2149]: E0213 20:17:07.563613 2149 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-a-865b7d79a6\": node \"ci-4081.3.1-a-865b7d79a6\" not found" Feb 13 20:17:07.839306 kubelet[2149]: I0213 20:17:07.839226 2149 apiserver.go:52] "Watching apiserver" Feb 13 20:17:07.901502 kubelet[2149]: I0213 20:17:07.901409 2149 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:17:11.062266 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)... Feb 13 20:17:11.062292 systemd[1]: Reloading... Feb 13 20:17:11.329032 zram_generator::config[2468]: No configuration found. Feb 13 20:17:11.845088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:12.105514 systemd[1]: Reloading finished in 1041 ms. Feb 13 20:17:12.220839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:12.262006 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:17:12.262365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:12.262445 systemd[1]: kubelet.service: Consumed 1.366s CPU time, 114.2M memory peak, 0B memory swap peak. Feb 13 20:17:12.274470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:17:12.629703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:12.635487 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:17:12.862178 kubelet[2518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:12.862178 kubelet[2518]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:17:12.862178 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:12.862178 kubelet[2518]: I0213 20:17:12.857769 2518 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:12.879574 kubelet[2518]: I0213 20:17:12.875139 2518 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:17:12.879574 kubelet[2518]: I0213 20:17:12.875182 2518 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:12.879574 kubelet[2518]: I0213 20:17:12.876068 2518 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:17:12.888833 kubelet[2518]: I0213 20:17:12.882664 2518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 20:17:12.903565 kubelet[2518]: I0213 20:17:12.902230 2518 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:12.926131 kubelet[2518]: E0213 20:17:12.925912 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:17:12.926670 kubelet[2518]: I0213 20:17:12.926615 2518 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:17:12.944775 kubelet[2518]: I0213 20:17:12.943777 2518 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:17:12.946185 kubelet[2518]: I0213 20:17:12.945970 2518 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:17:12.948677 kubelet[2518]: I0213 20:17:12.947878 2518 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:12.953488 kubelet[2518]: I0213 20:17:12.949080 2518 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.1-a-865b7d79a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:17:12.954537 kubelet[2518]: I0213 20:17:12.954070 2518 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:17:12.954537 kubelet[2518]: I0213 20:17:12.954112 2518 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:17:12.954537 kubelet[2518]: I0213 20:17:12.954305 2518 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:12.963077 kubelet[2518]: I0213 20:17:12.962785 2518 
kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:17:12.963077 kubelet[2518]: I0213 20:17:12.962847 2518 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:17:12.963077 kubelet[2518]: I0213 20:17:12.962902 2518 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:17:12.963077 kubelet[2518]: I0213 20:17:12.962926 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:12.976984 kubelet[2518]: I0213 20:17:12.975461 2518 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:17:12.976984 kubelet[2518]: I0213 20:17:12.976246 2518 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:12.976984 kubelet[2518]: I0213 20:17:12.976898 2518 server.go:1269] "Started kubelet" Feb 13 20:17:12.989913 kubelet[2518]: I0213 20:17:12.989601 2518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:13.004673 kubelet[2518]: I0213 20:17:12.999756 2518 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:13.014710 kubelet[2518]: I0213 20:17:13.009885 2518 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:17:13.014710 kubelet[2518]: E0213 20:17:13.012017 2518 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-865b7d79a6\" not found" Feb 13 20:17:13.014710 kubelet[2518]: I0213 20:17:13.013169 2518 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:17:13.020363 kubelet[2518]: I0213 20:17:13.016012 2518 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:13.099772 kubelet[2518]: I0213 20:17:13.020565 2518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:13.099772 kubelet[2518]: I0213 20:17:13.098068 2518 server.go:236] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:13.099772 kubelet[2518]: I0213 20:17:13.078627 2518 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:13.099772 kubelet[2518]: I0213 20:17:13.098347 2518 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:13.104760 kubelet[2518]: I0213 20:17:13.045912 2518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:17:13.105739 kubelet[2518]: I0213 20:17:13.083357 2518 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:17:13.158067 kubelet[2518]: E0213 20:17:13.154235 2518 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:13.164526 kubelet[2518]: I0213 20:17:13.164066 2518 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:13.193022 kubelet[2518]: I0213 20:17:13.182272 2518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:13.206654 kubelet[2518]: I0213 20:17:13.205726 2518 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:17:13.206654 kubelet[2518]: I0213 20:17:13.205802 2518 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:17:13.206654 kubelet[2518]: I0213 20:17:13.205832 2518 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:17:13.206654 kubelet[2518]: E0213 20:17:13.205911 2518 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:13.310116 kubelet[2518]: E0213 20:17:13.309627 2518 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:17:13.374461 kubelet[2518]: I0213 20:17:13.374430 2518 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:17:13.374982 kubelet[2518]: I0213 20:17:13.374654 2518 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:13.374982 kubelet[2518]: I0213 20:17:13.374683 2518 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:13.374982 kubelet[2518]: I0213 20:17:13.374888 2518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:17:13.374982 kubelet[2518]: I0213 20:17:13.374903 2518 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:17:13.374982 kubelet[2518]: I0213 20:17:13.374929 2518 policy_none.go:49] "None policy: Start" Feb 13 20:17:13.376429 kubelet[2518]: I0213 20:17:13.375993 2518 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:17:13.376429 kubelet[2518]: I0213 20:17:13.376028 2518 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:13.376429 kubelet[2518]: I0213 20:17:13.376280 2518 state_mem.go:75] "Updated machine memory state" Feb 13 20:17:13.395646 kubelet[2518]: I0213 20:17:13.394351 2518 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:13.395646 kubelet[2518]: I0213 20:17:13.395409 
2518 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:17:13.395646 kubelet[2518]: I0213 20:17:13.395436 2518 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:17:13.398622 kubelet[2518]: I0213 20:17:13.397125 2518 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:17:13.536142 kubelet[2518]: I0213 20:17:13.534966 2518 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.593680 kubelet[2518]: W0213 20:17:13.593622 2518 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:17:13.596182 kubelet[2518]: W0213 20:17:13.596145 2518 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:17:13.602384 kubelet[2518]: W0213 20:17:13.602241 2518 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:17:13.614973 kubelet[2518]: I0213 20:17:13.614208 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.614973 kubelet[2518]: I0213 20:17:13.614275 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.614973 kubelet[2518]: I0213 20:17:13.614307 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.614973 kubelet[2518]: I0213 20:17:13.614340 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.614973 kubelet[2518]: I0213 20:17:13.614369 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.615369 kubelet[2518]: I0213 20:17:13.614396 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.615369 kubelet[2518]: I0213 20:17:13.614425 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9abc947e6cc95ce147f090b8607432a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-865b7d79a6\" (UID: \"9abc947e6cc95ce147f090b8607432a3\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.615369 kubelet[2518]: I0213 20:17:13.614452 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c823964fede80c21163bba5ad847769-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-865b7d79a6\" (UID: \"9c823964fede80c21163bba5ad847769\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.615369 kubelet[2518]: I0213 20:17:13.614481 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd07e811f1c530b87babff0221956c41-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-865b7d79a6\" (UID: \"cd07e811f1c530b87babff0221956c41\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.618975 kubelet[2518]: I0213 20:17:13.617816 2518 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.618975 kubelet[2518]: I0213 20:17:13.618732 2518 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-865b7d79a6"
Feb 13 20:17:13.688147 kernel: hrtimer: interrupt took 7606233 ns
Feb 13 20:17:13.896331 kubelet[2518]: E0213 20:17:13.895469 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:13.899380 kubelet[2518]: E0213 20:17:13.898551 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:13.904387 kubelet[2518]: E0213 20:17:13.904342 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:13.973665 kubelet[2518]: I0213 20:17:13.973242 2518 apiserver.go:52] "Watching apiserver"
Feb 13 20:17:14.017185 kubelet[2518]: I0213 20:17:14.016085 2518 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 20:17:14.298900 kubelet[2518]: E0213 20:17:14.298515 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:14.301399 kubelet[2518]: E0213 20:17:14.301181 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:14.302797 kubelet[2518]: E0213 20:17:14.302743 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:14.423648 kubelet[2518]: I0213 20:17:14.423363 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-865b7d79a6" podStartSLOduration=1.423201178 podStartE2EDuration="1.423201178s" podCreationTimestamp="2025-02-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:14.422879596 +0000 UTC m=+1.723875521" watchObservedRunningTime="2025-02-13 20:17:14.423201178 +0000 UTC m=+1.724197093"
Feb 13 20:17:14.526483 kubelet[2518]: I0213 20:17:14.525840 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-865b7d79a6" podStartSLOduration=1.52577063 podStartE2EDuration="1.52577063s" podCreationTimestamp="2025-02-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:14.52538024 +0000 UTC m=+1.826376168" watchObservedRunningTime="2025-02-13 20:17:14.52577063 +0000 UTC m=+1.826766529"
Feb 13 20:17:14.526483 kubelet[2518]: I0213 20:17:14.526045 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-865b7d79a6" podStartSLOduration=1.526032955 podStartE2EDuration="1.526032955s" podCreationTimestamp="2025-02-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:14.466826899 +0000 UTC m=+1.767822824" watchObservedRunningTime="2025-02-13 20:17:14.526032955 +0000 UTC m=+1.827028887"
Feb 13 20:17:15.324465 kubelet[2518]: E0213 20:17:15.324300 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:15.568792 sudo[1627]: pam_unix(sudo:session): session closed for user root
Feb 13 20:17:15.588891 sshd[1624]: pam_unix(sshd:session): session closed for user core
Feb 13 20:17:15.601971 systemd[1]: sshd@6-147.182.251.87:22-147.75.109.163:38136.service: Deactivated successfully.
Feb 13 20:17:15.606848 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:17:15.607477 systemd[1]: session-7.scope: Consumed 6.301s CPU time, 154.7M memory peak, 0B memory swap peak.
Feb 13 20:17:15.616298 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:17:15.619448 systemd-logind[1448]: Removed session 7.
Feb 13 20:17:16.015002 kubelet[2518]: I0213 20:17:16.014849 2518 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 20:17:16.016047 containerd[1472]: time="2025-02-13T20:17:16.015882580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:17:16.017980 kubelet[2518]: I0213 20:17:16.017151 2518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 20:17:16.042849 kubelet[2518]: E0213 20:17:16.041765 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:16.305299 kubelet[2518]: E0213 20:17:16.305173 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:16.592756 kubelet[2518]: W0213 20:17:16.592611 2518 reflector.go:561] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4081.3.1-a-865b7d79a6" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object
Feb 13 20:17:16.593698 kubelet[2518]: W0213 20:17:16.593104 2518 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.1-a-865b7d79a6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object
Feb 13 20:17:16.593873 kubelet[2518]: E0213 20:17:16.593113 2518 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ci-4081.3.1-a-865b7d79a6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object" logger="UnhandledError"
Feb 13 20:17:16.594087 kubelet[2518]: W0213 20:17:16.592713 2518 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.1-a-865b7d79a6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object
Feb 13 20:17:16.594087 kubelet[2518]: E0213 20:17:16.593976 2518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081.3.1-a-865b7d79a6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object" logger="UnhandledError"
Feb 13 20:17:16.596013 kubelet[2518]: E0213 20:17:16.595058 2518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.1-a-865b7d79a6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.1-a-865b7d79a6' and this object" logger="UnhandledError"
Feb 13 20:17:16.597555 systemd[1]: Created slice kubepods-besteffort-podd4f7e099_21f5_4927_a4c9_e8a51d506601.slice - libcontainer container kubepods-besteffort-podd4f7e099_21f5_4927_a4c9_e8a51d506601.slice.
Feb 13 20:17:16.638370 systemd[1]: Created slice kubepods-burstable-pod3cab59e8_dcb7_4ce4_a1b7_f69e4f1ceacd.slice - libcontainer container kubepods-burstable-pod3cab59e8_dcb7_4ce4_a1b7_f69e4f1ceacd.slice.
Feb 13 20:17:16.762322 kubelet[2518]: I0213 20:17:16.762234 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4f7e099-21f5-4927-a4c9-e8a51d506601-kube-proxy\") pod \"kube-proxy-hf6gc\" (UID: \"d4f7e099-21f5-4927-a4c9-e8a51d506601\") " pod="kube-system/kube-proxy-hf6gc"
Feb 13 20:17:16.762528 kubelet[2518]: I0213 20:17:16.762338 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-run\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:16.762528 kubelet[2518]: I0213 20:17:16.762377 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mdp\" (UniqueName: \"kubernetes.io/projected/d4f7e099-21f5-4927-a4c9-e8a51d506601-kube-api-access-88mdp\") pod \"kube-proxy-hf6gc\" (UID: \"d4f7e099-21f5-4927-a4c9-e8a51d506601\") " pod="kube-system/kube-proxy-hf6gc"
Feb 13 20:17:16.762528 kubelet[2518]: I0213 20:17:16.762412 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvn5s\" (UniqueName: \"kubernetes.io/projected/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-kube-api-access-fvn5s\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:16.762528 kubelet[2518]: I0213 20:17:16.762436 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-cni\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:16.762528 kubelet[2518]: I0213 20:17:16.762459 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4f7e099-21f5-4927-a4c9-e8a51d506601-xtables-lock\") pod \"kube-proxy-hf6gc\" (UID: \"d4f7e099-21f5-4927-a4c9-e8a51d506601\") " pod="kube-system/kube-proxy-hf6gc"
Feb 13 20:17:16.762759 kubelet[2518]: I0213 20:17:16.762487 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4f7e099-21f5-4927-a4c9-e8a51d506601-lib-modules\") pod \"kube-proxy-hf6gc\" (UID: \"d4f7e099-21f5-4927-a4c9-e8a51d506601\") " pod="kube-system/kube-proxy-hf6gc"
Feb 13 20:17:16.762759 kubelet[2518]: I0213 20:17:16.762510 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-cni-plugin\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:16.762759 kubelet[2518]: I0213 20:17:16.762536 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-flannel-cfg\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:16.762759 kubelet[2518]: I0213 20:17:16.762592 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-xtables-lock\") pod \"kube-flannel-ds-wtsq2\" (UID: \"3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd\") " pod="kube-flannel/kube-flannel-ds-wtsq2"
Feb 13 20:17:17.865300 kubelet[2518]: E0213 20:17:17.865247 2518 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Feb 13 20:17:17.865797 kubelet[2518]: E0213 20:17:17.865375 2518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-flannel-cfg podName:3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd nodeName:}" failed. No retries permitted until 2025-02-13 20:17:18.365346616 +0000 UTC m=+5.666342532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd-flannel-cfg") pod "kube-flannel-ds-wtsq2" (UID: "3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 20:17:18.121723 kubelet[2518]: E0213 20:17:18.120655 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:18.122615 containerd[1472]: time="2025-02-13T20:17:18.122491624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hf6gc,Uid:d4f7e099-21f5-4927-a4c9-e8a51d506601,Namespace:kube-system,Attempt:0,}"
Feb 13 20:17:18.187143 containerd[1472]: time="2025-02-13T20:17:18.186684260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:17:18.187143 containerd[1472]: time="2025-02-13T20:17:18.186824868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:17:18.187143 containerd[1472]: time="2025-02-13T20:17:18.186850660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:17:18.187566 containerd[1472]: time="2025-02-13T20:17:18.187313908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:17:18.251813 systemd[1]: Started cri-containerd-82b04ca33b9d0d2925a4aaf643b5f6bde7b7b58786128ee64c01ea8ad03d9f5b.scope - libcontainer container 82b04ca33b9d0d2925a4aaf643b5f6bde7b7b58786128ee64c01ea8ad03d9f5b.
Feb 13 20:17:18.293234 containerd[1472]: time="2025-02-13T20:17:18.293131268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hf6gc,Uid:d4f7e099-21f5-4927-a4c9-e8a51d506601,Namespace:kube-system,Attempt:0,} returns sandbox id \"82b04ca33b9d0d2925a4aaf643b5f6bde7b7b58786128ee64c01ea8ad03d9f5b\""
Feb 13 20:17:18.295726 kubelet[2518]: E0213 20:17:18.295390 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:18.304872 containerd[1472]: time="2025-02-13T20:17:18.304795437Z" level=info msg="CreateContainer within sandbox \"82b04ca33b9d0d2925a4aaf643b5f6bde7b7b58786128ee64c01ea8ad03d9f5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 20:17:18.360282 containerd[1472]: time="2025-02-13T20:17:18.360169104Z" level=info msg="CreateContainer within sandbox \"82b04ca33b9d0d2925a4aaf643b5f6bde7b7b58786128ee64c01ea8ad03d9f5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70edbfc89a22b806c4f6414023229446fba90d2a69b6814c8491f695d2a87c56\""
Feb 13 20:17:18.364457 containerd[1472]: time="2025-02-13T20:17:18.362381051Z" level=info msg="StartContainer for \"70edbfc89a22b806c4f6414023229446fba90d2a69b6814c8491f695d2a87c56\""
Feb 13 20:17:18.429264 systemd[1]: Started cri-containerd-70edbfc89a22b806c4f6414023229446fba90d2a69b6814c8491f695d2a87c56.scope - libcontainer container 70edbfc89a22b806c4f6414023229446fba90d2a69b6814c8491f695d2a87c56.
Feb 13 20:17:18.445795 kubelet[2518]: E0213 20:17:18.445352 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:18.447136 containerd[1472]: time="2025-02-13T20:17:18.447065420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wtsq2,Uid:3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd,Namespace:kube-flannel,Attempt:0,}"
Feb 13 20:17:18.549602 containerd[1472]: time="2025-02-13T20:17:18.549375818Z" level=info msg="StartContainer for \"70edbfc89a22b806c4f6414023229446fba90d2a69b6814c8491f695d2a87c56\" returns successfully"
Feb 13 20:17:18.580566 containerd[1472]: time="2025-02-13T20:17:18.579683798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:17:18.580566 containerd[1472]: time="2025-02-13T20:17:18.579797308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:17:18.580566 containerd[1472]: time="2025-02-13T20:17:18.579845605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:17:18.580566 containerd[1472]: time="2025-02-13T20:17:18.580057928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:17:18.633024 systemd[1]: Started cri-containerd-317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e.scope - libcontainer container 317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e.
Feb 13 20:17:18.745343 containerd[1472]: time="2025-02-13T20:17:18.744717478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wtsq2,Uid:3cab59e8-dcb7-4ce4-a1b7-f69e4f1ceacd,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\""
Feb 13 20:17:18.749647 kubelet[2518]: E0213 20:17:18.748389 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:18.750990 containerd[1472]: time="2025-02-13T20:17:18.750909298Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:17:19.335681 kubelet[2518]: E0213 20:17:19.335543 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:19.616661 kubelet[2518]: E0213 20:17:19.606671 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:19.640149 kubelet[2518]: I0213 20:17:19.638695 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hf6gc" podStartSLOduration=3.638665533 podStartE2EDuration="3.638665533s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:19.357439845 +0000 UTC m=+6.658435774" watchObservedRunningTime="2025-02-13 20:17:19.638665533 +0000 UTC m=+6.939661457"
Feb 13 20:17:20.342200 kubelet[2518]: E0213 20:17:20.341032 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:20.345797 kubelet[2518]: E0213 20:17:20.345742 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:21.360805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138487886.mount: Deactivated successfully.
Feb 13 20:17:21.457280 containerd[1472]: time="2025-02-13T20:17:21.457163124Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:21.461152 containerd[1472]: time="2025-02-13T20:17:21.461028713Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Feb 13 20:17:21.462259 containerd[1472]: time="2025-02-13T20:17:21.462163249Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:21.469870 containerd[1472]: time="2025-02-13T20:17:21.469752226Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:21.472493 containerd[1472]: time="2025-02-13T20:17:21.471509481Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.720509904s"
Feb 13 20:17:21.472493 containerd[1472]: time="2025-02-13T20:17:21.472014638Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Feb 13 20:17:21.478631 containerd[1472]: time="2025-02-13T20:17:21.478566934Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 20:17:21.513693 containerd[1472]: time="2025-02-13T20:17:21.511379882Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa\""
Feb 13 20:17:21.514392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523382072.mount: Deactivated successfully.
Feb 13 20:17:21.516481 containerd[1472]: time="2025-02-13T20:17:21.516336677Z" level=info msg="StartContainer for \"2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa\""
Feb 13 20:17:21.584609 systemd[1]: Started cri-containerd-2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa.scope - libcontainer container 2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa.
Feb 13 20:17:21.647068 systemd[1]: cri-containerd-2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa.scope: Deactivated successfully.
Feb 13 20:17:21.654962 containerd[1472]: time="2025-02-13T20:17:21.654848599Z" level=info msg="StartContainer for \"2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa\" returns successfully"
Feb 13 20:17:21.767497 containerd[1472]: time="2025-02-13T20:17:21.767053121Z" level=info msg="shim disconnected" id=2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa namespace=k8s.io
Feb 13 20:17:21.767497 containerd[1472]: time="2025-02-13T20:17:21.767174326Z" level=warning msg="cleaning up after shim disconnected" id=2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa namespace=k8s.io
Feb 13 20:17:21.767497 containerd[1472]: time="2025-02-13T20:17:21.767189384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:17:22.125106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2043cce0f17731bfb7d8be55f10e4a0f6dca4e70ddab2d28b0d01645ce23f0fa-rootfs.mount: Deactivated successfully.
Feb 13 20:17:22.351313 kubelet[2518]: E0213 20:17:22.350590 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:22.363036 kubelet[2518]: E0213 20:17:22.361848 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:22.364374 containerd[1472]: time="2025-02-13T20:17:22.364281428Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:17:23.367170 kubelet[2518]: E0213 20:17:23.367119 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:24.370196 kubelet[2518]: E0213 20:17:24.369606 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:17:24.996583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859852840.mount: Deactivated successfully.
Feb 13 20:17:26.368970 containerd[1472]: time="2025-02-13T20:17:26.367597345Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:26.371680 containerd[1472]: time="2025-02-13T20:17:26.371594270Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Feb 13 20:17:26.373049 containerd[1472]: time="2025-02-13T20:17:26.372979278Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:26.379828 containerd[1472]: time="2025-02-13T20:17:26.379755360Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:17:26.381600 containerd[1472]: time="2025-02-13T20:17:26.381551919Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.017160025s"
Feb 13 20:17:26.381789 containerd[1472]: time="2025-02-13T20:17:26.381772401Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Feb 13 20:17:26.388463 containerd[1472]: time="2025-02-13T20:17:26.388394364Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:17:26.417095 containerd[1472]: time="2025-02-13T20:17:26.416930000Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab\""
Feb 13 20:17:26.419217 containerd[1472]: time="2025-02-13T20:17:26.419097857Z" level=info msg="StartContainer for \"5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab\""
Feb 13 20:17:26.472718 systemd[1]: Started cri-containerd-5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab.scope - libcontainer container 5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab.
Feb 13 20:17:26.517613 systemd[1]: cri-containerd-5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab.scope: Deactivated successfully.
Feb 13 20:17:26.521657 containerd[1472]: time="2025-02-13T20:17:26.521452207Z" level=info msg="StartContainer for \"5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab\" returns successfully"
Feb 13 20:17:26.561213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab-rootfs.mount: Deactivated successfully.
Feb 13 20:17:26.570215 kubelet[2518]: I0213 20:17:26.569437 2518 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 20:17:26.683435 systemd[1]: Created slice kubepods-burstable-pod1859f958_5a85_4d5a_ae6c_2db5887e6c66.slice - libcontainer container kubepods-burstable-pod1859f958_5a85_4d5a_ae6c_2db5887e6c66.slice.
Feb 13 20:17:26.743964 kubelet[2518]: I0213 20:17:26.743667 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1859f958-5a85-4d5a-ae6c-2db5887e6c66-config-volume\") pod \"coredns-6f6b679f8f-9hlcl\" (UID: \"1859f958-5a85-4d5a-ae6c-2db5887e6c66\") " pod="kube-system/coredns-6f6b679f8f-9hlcl"
Feb 13 20:17:26.743964 kubelet[2518]: I0213 20:17:26.743839 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8qgr\" (UniqueName: \"kubernetes.io/projected/1859f958-5a85-4d5a-ae6c-2db5887e6c66-kube-api-access-b8qgr\") pod \"coredns-6f6b679f8f-9hlcl\" (UID: \"1859f958-5a85-4d5a-ae6c-2db5887e6c66\") " pod="kube-system/coredns-6f6b679f8f-9hlcl"
Feb 13 20:17:26.801296 containerd[1472]: time="2025-02-13T20:17:26.801174453Z" level=info msg="shim disconnected" id=5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab namespace=k8s.io
Feb 13 20:17:26.802717 containerd[1472]: time="2025-02-13T20:17:26.801546023Z" level=warning msg="cleaning up after shim disconnected" id=5d3bb05ae6d9a1dd3b4fc207cd681527dee26c9e10a4feae20617280f6a2b3ab namespace=k8s.io
Feb 13 20:17:26.802717 containerd[1472]: time="2025-02-13T20:17:26.801568075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:17:26.807001 systemd[1]: Created slice kubepods-burstable-pode1797f93_fa5b_4be2_8580_ade2203b8a0c.slice - libcontainer container kubepods-burstable-pode1797f93_fa5b_4be2_8580_ade2203b8a0c.slice.
Feb 13 20:17:26.845021 kubelet[2518]: I0213 20:17:26.844721 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1797f93-fa5b-4be2-8580-ade2203b8a0c-config-volume\") pod \"coredns-6f6b679f8f-nnrpm\" (UID: \"e1797f93-fa5b-4be2-8580-ade2203b8a0c\") " pod="kube-system/coredns-6f6b679f8f-nnrpm" Feb 13 20:17:26.845021 kubelet[2518]: I0213 20:17:26.844779 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlt9\" (UniqueName: \"kubernetes.io/projected/e1797f93-fa5b-4be2-8580-ade2203b8a0c-kube-api-access-dmlt9\") pod \"coredns-6f6b679f8f-nnrpm\" (UID: \"e1797f93-fa5b-4be2-8580-ade2203b8a0c\") " pod="kube-system/coredns-6f6b679f8f-nnrpm" Feb 13 20:17:26.994901 kubelet[2518]: E0213 20:17:26.994010 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:26.995779 containerd[1472]: time="2025-02-13T20:17:26.995713496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9hlcl,Uid:1859f958-5a85-4d5a-ae6c-2db5887e6c66,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.051667 containerd[1472]: time="2025-02-13T20:17:27.051203949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9hlcl,Uid:1859f958-5a85-4d5a-ae6c-2db5887e6c66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e87bccc820e3784386e50e77360de03dcac1a8cdd3bfc0665f72833ac48ba98e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:17:27.052829 kubelet[2518]: E0213 20:17:27.052038 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e87bccc820e3784386e50e77360de03dcac1a8cdd3bfc0665f72833ac48ba98e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:17:27.052829 kubelet[2518]: E0213 20:17:27.052163 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87bccc820e3784386e50e77360de03dcac1a8cdd3bfc0665f72833ac48ba98e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-9hlcl" Feb 13 20:17:27.052829 kubelet[2518]: E0213 20:17:27.052207 2518 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e87bccc820e3784386e50e77360de03dcac1a8cdd3bfc0665f72833ac48ba98e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-9hlcl" Feb 13 20:17:27.052829 kubelet[2518]: E0213 20:17:27.052280 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9hlcl_kube-system(1859f958-5a85-4d5a-ae6c-2db5887e6c66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9hlcl_kube-system(1859f958-5a85-4d5a-ae6c-2db5887e6c66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e87bccc820e3784386e50e77360de03dcac1a8cdd3bfc0665f72833ac48ba98e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-9hlcl" podUID="1859f958-5a85-4d5a-ae6c-2db5887e6c66" Feb 13 20:17:27.117362 kubelet[2518]: E0213 20:17:27.117004 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:27.121012 containerd[1472]: time="2025-02-13T20:17:27.120633789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nnrpm,Uid:e1797f93-fa5b-4be2-8580-ade2203b8a0c,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.160468 containerd[1472]: time="2025-02-13T20:17:27.160315041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nnrpm,Uid:e1797f93-fa5b-4be2-8580-ade2203b8a0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2417b8fd952dbc49096bfb47df84718b26f61a0a13e0b254575fe3bcf5a25f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:17:27.161888 kubelet[2518]: E0213 20:17:27.161495 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2417b8fd952dbc49096bfb47df84718b26f61a0a13e0b254575fe3bcf5a25f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:17:27.161888 kubelet[2518]: E0213 20:17:27.161665 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2417b8fd952dbc49096bfb47df84718b26f61a0a13e0b254575fe3bcf5a25f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-nnrpm" Feb 13 20:17:27.161888 kubelet[2518]: E0213 20:17:27.161717 2518 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2417b8fd952dbc49096bfb47df84718b26f61a0a13e0b254575fe3bcf5a25f1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such 
file or directory" pod="kube-system/coredns-6f6b679f8f-nnrpm" Feb 13 20:17:27.161888 kubelet[2518]: E0213 20:17:27.161829 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-nnrpm_kube-system(e1797f93-fa5b-4be2-8580-ade2203b8a0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-nnrpm_kube-system(e1797f93-fa5b-4be2-8580-ade2203b8a0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2417b8fd952dbc49096bfb47df84718b26f61a0a13e0b254575fe3bcf5a25f1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-nnrpm" podUID="e1797f93-fa5b-4be2-8580-ade2203b8a0c" Feb 13 20:17:27.385875 kubelet[2518]: E0213 20:17:27.385122 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:27.393692 containerd[1472]: time="2025-02-13T20:17:27.391792739Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 20:17:27.439149 containerd[1472]: time="2025-02-13T20:17:27.438717762Z" level=info msg="CreateContainer within sandbox \"317d4edcbde500eb30f5e49e4fd832b768b128204be904958386795518ded63e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6474fa16cfb19449fb75f78128d43455ad8445db48fa2b8eed101a7fbfa3c119\"" Feb 13 20:17:27.442759 containerd[1472]: time="2025-02-13T20:17:27.440922539Z" level=info msg="StartContainer for \"6474fa16cfb19449fb75f78128d43455ad8445db48fa2b8eed101a7fbfa3c119\"" Feb 13 20:17:27.515311 systemd[1]: Started cri-containerd-6474fa16cfb19449fb75f78128d43455ad8445db48fa2b8eed101a7fbfa3c119.scope - libcontainer container 
6474fa16cfb19449fb75f78128d43455ad8445db48fa2b8eed101a7fbfa3c119. Feb 13 20:17:27.574483 containerd[1472]: time="2025-02-13T20:17:27.574390632Z" level=info msg="StartContainer for \"6474fa16cfb19449fb75f78128d43455ad8445db48fa2b8eed101a7fbfa3c119\" returns successfully" Feb 13 20:17:28.400108 kubelet[2518]: E0213 20:17:28.399929 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:28.649356 kubelet[2518]: I0213 20:17:28.649233 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wtsq2" podStartSLOduration=5.015532788 podStartE2EDuration="12.649176121s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="2025-02-13 20:17:18.750116329 +0000 UTC m=+6.051112230" lastFinishedPulling="2025-02-13 20:17:26.383759647 +0000 UTC m=+13.684755563" observedRunningTime="2025-02-13 20:17:28.645540368 +0000 UTC m=+15.946536298" watchObservedRunningTime="2025-02-13 20:17:28.649176121 +0000 UTC m=+15.950172058" Feb 13 20:17:28.891188 systemd-networkd[1367]: flannel.1: Link UP Feb 13 20:17:28.891201 systemd-networkd[1367]: flannel.1: Gained carrier Feb 13 20:17:29.401595 kubelet[2518]: E0213 20:17:29.401349 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:30.690797 systemd-networkd[1367]: flannel.1: Gained IPv6LL Feb 13 20:17:39.213353 kubelet[2518]: E0213 20:17:39.212505 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:39.215479 containerd[1472]: time="2025-02-13T20:17:39.214340172Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-9hlcl,Uid:1859f958-5a85-4d5a-ae6c-2db5887e6c66,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:39.308569 systemd-networkd[1367]: cni0: Link UP Feb 13 20:17:39.308578 systemd-networkd[1367]: cni0: Gained carrier Feb 13 20:17:39.322032 kernel: cni0: port 1(vethff65aa72) entered blocking state Feb 13 20:17:39.322189 kernel: cni0: port 1(vethff65aa72) entered disabled state Feb 13 20:17:39.319395 systemd-networkd[1367]: vethff65aa72: Link UP Feb 13 20:17:39.324977 kernel: vethff65aa72: entered allmulticast mode Feb 13 20:17:39.327265 kernel: vethff65aa72: entered promiscuous mode Feb 13 20:17:39.330597 systemd-networkd[1367]: cni0: Lost carrier Feb 13 20:17:39.350121 kernel: cni0: port 1(vethff65aa72) entered blocking state Feb 13 20:17:39.350275 kernel: cni0: port 1(vethff65aa72) entered forwarding state Feb 13 20:17:39.351107 systemd-networkd[1367]: vethff65aa72: Gained carrier Feb 13 20:17:39.358695 systemd-networkd[1367]: cni0: Gained carrier Feb 13 20:17:39.371711 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Feb 13 20:17:39.371711 containerd[1472]: delegateAdd: netconf sent to delegate plugin: Feb 13 20:17:39.423124 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T20:17:39.421272665Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:39.423124 containerd[1472]: time="2025-02-13T20:17:39.421399367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:39.423124 containerd[1472]: time="2025-02-13T20:17:39.421419300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:39.423124 containerd[1472]: time="2025-02-13T20:17:39.421549561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:39.493280 systemd[1]: Started cri-containerd-f79c2543c856b516ce11b06c7b311d56f1bab6079aa90eae8e52834660ec1de6.scope - libcontainer container f79c2543c856b516ce11b06c7b311d56f1bab6079aa90eae8e52834660ec1de6. Feb 13 20:17:39.581836 containerd[1472]: time="2025-02-13T20:17:39.581633034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9hlcl,Uid:1859f958-5a85-4d5a-ae6c-2db5887e6c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79c2543c856b516ce11b06c7b311d56f1bab6079aa90eae8e52834660ec1de6\"" Feb 13 20:17:39.586845 kubelet[2518]: E0213 20:17:39.585838 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:39.593046 containerd[1472]: time="2025-02-13T20:17:39.592987822Z" level=info msg="CreateContainer within sandbox \"f79c2543c856b516ce11b06c7b311d56f1bab6079aa90eae8e52834660ec1de6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:39.655772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226771019.mount: Deactivated successfully. 
Feb 13 20:17:39.673911 containerd[1472]: time="2025-02-13T20:17:39.673829797Z" level=info msg="CreateContainer within sandbox \"f79c2543c856b516ce11b06c7b311d56f1bab6079aa90eae8e52834660ec1de6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cbc721dcffae0f2c82612717ed7369485b8ffcedc9050e2c79524f6827a7e92\"" Feb 13 20:17:39.676239 containerd[1472]: time="2025-02-13T20:17:39.675265923Z" level=info msg="StartContainer for \"1cbc721dcffae0f2c82612717ed7369485b8ffcedc9050e2c79524f6827a7e92\"" Feb 13 20:17:39.721910 systemd[1]: Started cri-containerd-1cbc721dcffae0f2c82612717ed7369485b8ffcedc9050e2c79524f6827a7e92.scope - libcontainer container 1cbc721dcffae0f2c82612717ed7369485b8ffcedc9050e2c79524f6827a7e92. Feb 13 20:17:39.806874 containerd[1472]: time="2025-02-13T20:17:39.806570714Z" level=info msg="StartContainer for \"1cbc721dcffae0f2c82612717ed7369485b8ffcedc9050e2c79524f6827a7e92\" returns successfully" Feb 13 20:17:40.428581 systemd-networkd[1367]: vethff65aa72: Gained IPv6LL Feb 13 20:17:40.457395 kubelet[2518]: E0213 20:17:40.453049 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:40.522741 kubelet[2518]: I0213 20:17:40.522582 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9hlcl" podStartSLOduration=24.522557493 podStartE2EDuration="24.522557493s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:40.482138821 +0000 UTC m=+27.783134742" watchObservedRunningTime="2025-02-13 20:17:40.522557493 +0000 UTC m=+27.823553405" Feb 13 20:17:40.993218 systemd-networkd[1367]: cni0: Gained IPv6LL Feb 13 20:17:41.207636 kubelet[2518]: E0213 20:17:41.207553 2518 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:41.210534 containerd[1472]: time="2025-02-13T20:17:41.208676783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nnrpm,Uid:e1797f93-fa5b-4be2-8580-ade2203b8a0c,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:41.270611 kernel: cni0: port 2(vethf2e0dfc2) entered blocking state Feb 13 20:17:41.270762 kernel: cni0: port 2(vethf2e0dfc2) entered disabled state Feb 13 20:17:41.269147 systemd-networkd[1367]: vethf2e0dfc2: Link UP Feb 13 20:17:41.275310 kernel: vethf2e0dfc2: entered allmulticast mode Feb 13 20:17:41.278744 kernel: vethf2e0dfc2: entered promiscuous mode Feb 13 20:17:41.281208 kernel: cni0: port 2(vethf2e0dfc2) entered blocking state Feb 13 20:17:41.281354 kernel: cni0: port 2(vethf2e0dfc2) entered forwarding state Feb 13 20:17:41.294990 systemd-networkd[1367]: vethf2e0dfc2: Gained carrier Feb 13 20:17:41.305194 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Feb 13 20:17:41.305194 containerd[1472]: delegateAdd: netconf sent to delegate plugin: Feb 13 20:17:41.352976 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T20:17:41.351244179Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:41.352976 containerd[1472]: time="2025-02-13T20:17:41.351402394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:41.352976 containerd[1472]: time="2025-02-13T20:17:41.352670357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:41.352976 containerd[1472]: time="2025-02-13T20:17:41.352888213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:41.412919 systemd[1]: Started cri-containerd-f796e26dfa3989f289da378e2bb5812ad0ecd80a41799be45c6ab720f2a02a12.scope - libcontainer container f796e26dfa3989f289da378e2bb5812ad0ecd80a41799be45c6ab720f2a02a12. Feb 13 20:17:41.464081 kubelet[2518]: E0213 20:17:41.461889 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:41.524300 containerd[1472]: time="2025-02-13T20:17:41.524119587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nnrpm,Uid:e1797f93-fa5b-4be2-8580-ade2203b8a0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f796e26dfa3989f289da378e2bb5812ad0ecd80a41799be45c6ab720f2a02a12\"" Feb 13 20:17:41.529641 kubelet[2518]: E0213 20:17:41.528047 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:41.540720 containerd[1472]: time="2025-02-13T20:17:41.539490498Z" level=info msg="CreateContainer within sandbox \"f796e26dfa3989f289da378e2bb5812ad0ecd80a41799be45c6ab720f2a02a12\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:41.585251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211085978.mount: Deactivated successfully. Feb 13 20:17:41.598068 containerd[1472]: time="2025-02-13T20:17:41.597971200Z" level=info msg="CreateContainer within sandbox \"f796e26dfa3989f289da378e2bb5812ad0ecd80a41799be45c6ab720f2a02a12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"389c1b4c9e71580dc31fe9669fceb6d39b9a25572ded8b9a04fb315faf4e93c4\"" Feb 13 20:17:41.600784 containerd[1472]: time="2025-02-13T20:17:41.600721288Z" level=info msg="StartContainer for \"389c1b4c9e71580dc31fe9669fceb6d39b9a25572ded8b9a04fb315faf4e93c4\"" Feb 13 20:17:41.666068 systemd[1]: Started cri-containerd-389c1b4c9e71580dc31fe9669fceb6d39b9a25572ded8b9a04fb315faf4e93c4.scope - libcontainer container 389c1b4c9e71580dc31fe9669fceb6d39b9a25572ded8b9a04fb315faf4e93c4. Feb 13 20:17:41.735349 containerd[1472]: time="2025-02-13T20:17:41.735118898Z" level=info msg="StartContainer for \"389c1b4c9e71580dc31fe9669fceb6d39b9a25572ded8b9a04fb315faf4e93c4\" returns successfully" Feb 13 20:17:42.472693 kubelet[2518]: E0213 20:17:42.472648 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:42.473449 kubelet[2518]: E0213 20:17:42.473339 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:42.544243 kubelet[2518]: I0213 20:17:42.544080 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nnrpm" podStartSLOduration=26.544020491 podStartE2EDuration="26.544020491s" podCreationTimestamp="2025-02-13 20:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:42.498514946 +0000 UTC m=+29.799510882" watchObservedRunningTime="2025-02-13 20:17:42.544020491 +0000 UTC m=+29.845016413" Feb 13 20:17:43.105359 systemd-networkd[1367]: vethf2e0dfc2: Gained IPv6LL Feb 13 20:17:43.475523 kubelet[2518]: E0213 20:17:43.475290 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:17:44.479750 kubelet[2518]: E0213 20:17:44.479533 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:18:05.592565 systemd[1]: Started sshd@7-147.182.251.87:22-147.75.109.163:59000.service - OpenSSH per-connection server daemon (147.75.109.163:59000). Feb 13 20:18:05.692476 sshd[3508]: Accepted publickey for core from 147.75.109.163 port 59000 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:05.699000 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:05.711202 systemd-logind[1448]: New session 8 of user core. Feb 13 20:18:05.717595 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:18:06.019086 sshd[3508]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:06.026874 systemd[1]: sshd@7-147.182.251.87:22-147.75.109.163:59000.service: Deactivated successfully. Feb 13 20:18:06.030528 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:18:06.032260 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:18:06.035524 systemd-logind[1448]: Removed session 8. Feb 13 20:18:11.046524 systemd[1]: Started sshd@8-147.182.251.87:22-147.75.109.163:40136.service - OpenSSH per-connection server daemon (147.75.109.163:40136). 
Feb 13 20:18:11.112830 sshd[3543]: Accepted publickey for core from 147.75.109.163 port 40136 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:11.116318 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:11.125805 systemd-logind[1448]: New session 9 of user core. Feb 13 20:18:11.140746 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:18:11.335687 sshd[3543]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:11.343628 systemd[1]: sshd@8-147.182.251.87:22-147.75.109.163:40136.service: Deactivated successfully. Feb 13 20:18:11.347106 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:18:11.348741 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:18:11.350686 systemd-logind[1448]: Removed session 9. Feb 13 20:18:16.363021 systemd[1]: Started sshd@9-147.182.251.87:22-147.75.109.163:40148.service - OpenSSH per-connection server daemon (147.75.109.163:40148). Feb 13 20:18:16.433414 sshd[3580]: Accepted publickey for core from 147.75.109.163 port 40148 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:16.434715 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:16.443324 systemd-logind[1448]: New session 10 of user core. Feb 13 20:18:16.451236 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:18:16.627212 sshd[3580]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:16.633086 systemd[1]: sshd@9-147.182.251.87:22-147.75.109.163:40148.service: Deactivated successfully. Feb 13 20:18:16.637129 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:18:16.638715 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:18:16.640389 systemd-logind[1448]: Removed session 10. 
Feb 13 20:18:21.210045 kubelet[2518]: E0213 20:18:21.209447 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:18:21.663476 systemd[1]: Started sshd@10-147.182.251.87:22-147.75.109.163:47598.service - OpenSSH per-connection server daemon (147.75.109.163:47598). Feb 13 20:18:21.737383 sshd[3617]: Accepted publickey for core from 147.75.109.163 port 47598 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:21.741519 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:21.749268 systemd-logind[1448]: New session 11 of user core. Feb 13 20:18:21.759349 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:18:22.061159 sshd[3617]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:22.076097 systemd[1]: sshd@10-147.182.251.87:22-147.75.109.163:47598.service: Deactivated successfully. Feb 13 20:18:22.081681 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:18:22.087312 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:18:22.126624 systemd[1]: Started sshd@11-147.182.251.87:22-147.75.109.163:47610.service - OpenSSH per-connection server daemon (147.75.109.163:47610). Feb 13 20:18:22.128229 systemd-logind[1448]: Removed session 11. Feb 13 20:18:22.219291 sshd[3631]: Accepted publickey for core from 147.75.109.163 port 47610 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:22.228219 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:22.238245 systemd-logind[1448]: New session 12 of user core. Feb 13 20:18:22.253428 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 20:18:22.765410 sshd[3631]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:22.788876 systemd[1]: Started sshd@12-147.182.251.87:22-147.75.109.163:47618.service - OpenSSH per-connection server daemon (147.75.109.163:47618). Feb 13 20:18:22.794210 systemd[1]: sshd@11-147.182.251.87:22-147.75.109.163:47610.service: Deactivated successfully. Feb 13 20:18:22.811799 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:18:22.823369 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:18:22.835822 systemd-logind[1448]: Removed session 12. Feb 13 20:18:22.934515 sshd[3640]: Accepted publickey for core from 147.75.109.163 port 47618 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:22.938398 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:22.950534 systemd-logind[1448]: New session 13 of user core. Feb 13 20:18:22.956338 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:18:23.266989 sshd[3640]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:23.302084 systemd[1]: sshd@12-147.182.251.87:22-147.75.109.163:47618.service: Deactivated successfully. Feb 13 20:18:23.314229 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:18:23.342887 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:18:23.350527 systemd-logind[1448]: Removed session 13. Feb 13 20:18:27.213187 kubelet[2518]: E0213 20:18:27.209677 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:18:28.305640 systemd[1]: Started sshd@13-147.182.251.87:22-147.75.109.163:47624.service - OpenSSH per-connection server daemon (147.75.109.163:47624). 
Feb 13 20:18:28.376174 sshd[3677]: Accepted publickey for core from 147.75.109.163 port 47624 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:28.379341 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:28.391466 systemd-logind[1448]: New session 14 of user core. Feb 13 20:18:28.400434 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:18:28.791157 sshd[3677]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:28.814852 systemd[1]: sshd@13-147.182.251.87:22-147.75.109.163:47624.service: Deactivated successfully. Feb 13 20:18:28.830661 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:18:28.836735 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:18:28.842298 systemd-logind[1448]: Removed session 14. Feb 13 20:18:33.817770 systemd[1]: Started sshd@14-147.182.251.87:22-147.75.109.163:54696.service - OpenSSH per-connection server daemon (147.75.109.163:54696). Feb 13 20:18:33.901851 sshd[3711]: Accepted publickey for core from 147.75.109.163 port 54696 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:33.902999 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:33.910896 systemd-logind[1448]: New session 15 of user core. Feb 13 20:18:33.918519 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:18:34.104104 sshd[3711]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:34.112367 systemd[1]: sshd@14-147.182.251.87:22-147.75.109.163:54696.service: Deactivated successfully. Feb 13 20:18:34.117334 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:18:34.133045 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. 
Feb 13 20:18:34.144654 systemd[1]: Started sshd@15-147.182.251.87:22-147.75.109.163:54710.service - OpenSSH per-connection server daemon (147.75.109.163:54710). Feb 13 20:18:34.148602 systemd-logind[1448]: Removed session 15. Feb 13 20:18:34.242451 sshd[3724]: Accepted publickey for core from 147.75.109.163 port 54710 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:34.246654 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:34.263598 systemd-logind[1448]: New session 16 of user core. Feb 13 20:18:34.272378 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:18:34.716299 sshd[3724]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:34.729041 systemd[1]: sshd@15-147.182.251.87:22-147.75.109.163:54710.service: Deactivated successfully. Feb 13 20:18:34.737549 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:18:34.741688 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:18:34.751622 systemd[1]: Started sshd@16-147.182.251.87:22-147.75.109.163:54714.service - OpenSSH per-connection server daemon (147.75.109.163:54714). Feb 13 20:18:34.754793 systemd-logind[1448]: Removed session 16. Feb 13 20:18:34.831902 sshd[3741]: Accepted publickey for core from 147.75.109.163 port 54714 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:34.832860 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:34.855176 systemd-logind[1448]: New session 17 of user core. Feb 13 20:18:34.862768 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:18:37.141812 sshd[3741]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:37.159675 systemd[1]: sshd@16-147.182.251.87:22-147.75.109.163:54714.service: Deactivated successfully. Feb 13 20:18:37.161045 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. 
Feb 13 20:18:37.166203 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:18:37.173188 systemd-logind[1448]: Removed session 17.
Feb 13 20:18:37.188420 systemd[1]: Started sshd@17-147.182.251.87:22-147.75.109.163:54722.service - OpenSSH per-connection server daemon (147.75.109.163:54722).
Feb 13 20:18:37.261858 sshd[3775]: Accepted publickey for core from 147.75.109.163 port 54722 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:37.265843 sshd[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:37.279521 systemd-logind[1448]: New session 18 of user core.
Feb 13 20:18:37.288802 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:18:37.730566 sshd[3775]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:37.744581 systemd[1]: sshd@17-147.182.251.87:22-147.75.109.163:54722.service: Deactivated successfully.
Feb 13 20:18:37.748542 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:18:37.755079 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:18:37.761548 systemd[1]: Started sshd@18-147.182.251.87:22-147.75.109.163:54724.service - OpenSSH per-connection server daemon (147.75.109.163:54724).
Feb 13 20:18:37.764043 systemd-logind[1448]: Removed session 18.
Feb 13 20:18:37.814188 sshd[3786]: Accepted publickey for core from 147.75.109.163 port 54724 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:37.816522 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:37.826838 systemd-logind[1448]: New session 19 of user core.
Feb 13 20:18:37.829308 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:18:38.022300 sshd[3786]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:38.039192 systemd[1]: sshd@18-147.182.251.87:22-147.75.109.163:54724.service: Deactivated successfully.
Feb 13 20:18:38.044779 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:18:38.050458 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:18:38.053347 systemd-logind[1448]: Removed session 19.
Feb 13 20:18:40.229986 kubelet[2518]: E0213 20:18:40.229153 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:43.054571 systemd[1]: Started sshd@19-147.182.251.87:22-147.75.109.163:58182.service - OpenSSH per-connection server daemon (147.75.109.163:58182).
Feb 13 20:18:43.185312 sshd[3820]: Accepted publickey for core from 147.75.109.163 port 58182 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:43.187289 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:43.209057 systemd-logind[1448]: New session 20 of user core.
Feb 13 20:18:43.220494 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:18:43.504652 sshd[3820]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:43.523613 systemd[1]: sshd@19-147.182.251.87:22-147.75.109.163:58182.service: Deactivated successfully.
Feb 13 20:18:43.527525 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:18:43.531672 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:18:43.537538 systemd-logind[1448]: Removed session 20.
Feb 13 20:18:45.210266 kubelet[2518]: E0213 20:18:45.210195 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:48.536913 systemd[1]: Started sshd@20-147.182.251.87:22-147.75.109.163:58188.service - OpenSSH per-connection server daemon (147.75.109.163:58188).
Feb 13 20:18:48.638473 sshd[3857]: Accepted publickey for core from 147.75.109.163 port 58188 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:48.641208 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:48.673418 systemd-logind[1448]: New session 21 of user core.
Feb 13 20:18:48.685432 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:18:48.921559 sshd[3857]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:48.933354 systemd[1]: sshd@20-147.182.251.87:22-147.75.109.163:58188.service: Deactivated successfully.
Feb 13 20:18:48.952465 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:18:48.958744 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:18:48.962316 systemd-logind[1448]: Removed session 21.
Feb 13 20:18:49.212210 kubelet[2518]: E0213 20:18:49.210457 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:53.949545 systemd[1]: Started sshd@21-147.182.251.87:22-147.75.109.163:38872.service - OpenSSH per-connection server daemon (147.75.109.163:38872).
Feb 13 20:18:54.013978 sshd[3892]: Accepted publickey for core from 147.75.109.163 port 38872 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:54.017186 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:54.028162 systemd-logind[1448]: New session 22 of user core.
Feb 13 20:18:54.033323 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:18:54.207693 sshd[3892]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:54.215661 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:18:54.216535 systemd[1]: sshd@21-147.182.251.87:22-147.75.109.163:38872.service: Deactivated successfully.
Feb 13 20:18:54.223745 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:18:54.229087 systemd-logind[1448]: Removed session 22.
Feb 13 20:18:58.207294 kubelet[2518]: E0213 20:18:58.207233 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:18:59.225506 systemd[1]: Started sshd@22-147.182.251.87:22-147.75.109.163:38876.service - OpenSSH per-connection server daemon (147.75.109.163:38876).
Feb 13 20:18:59.280118 sshd[3925]: Accepted publickey for core from 147.75.109.163 port 38876 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:18:59.283466 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:18:59.300729 systemd-logind[1448]: New session 23 of user core.
Feb 13 20:18:59.312388 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:18:59.483163 sshd[3925]: pam_unix(sshd:session): session closed for user core
Feb 13 20:18:59.490505 systemd[1]: sshd@22-147.182.251.87:22-147.75.109.163:38876.service: Deactivated successfully.
Feb 13 20:18:59.498522 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:18:59.502047 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:18:59.504565 systemd-logind[1448]: Removed session 23.